Nov 24 21:38:35 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 21:38:35 crc restorecon[4680]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:38:35 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:38:36 crc restorecon[4680]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 
21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:38:36 crc 
restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 
21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:38:36 crc restorecon[4680]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:38:36 crc restorecon[4680]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 24 21:38:38 crc kubenswrapper[4767]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 21:38:38 crc kubenswrapper[4767]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 24 21:38:38 crc kubenswrapper[4767]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 21:38:38 crc kubenswrapper[4767]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 21:38:38 crc kubenswrapper[4767]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 24 21:38:38 crc kubenswrapper[4767]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.007410 4767 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019583 4767 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019634 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019644 4767 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019653 4767 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019666 4767 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019676 4767 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019685 4767 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019694 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019703 4767 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019714 4767 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019725 4767 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019734 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019742 4767 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019750 4767 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019757 4767 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019765 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019773 4767 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019781 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019788 4767 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019796 4767 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019806 4767 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019815 4767 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019824 4767 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019833 4767 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019856 4767 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019864 4767 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019873 4767 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019883 4767 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019895 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019907 4767 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019916 4767 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019924 4767 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019932 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019941 4767 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019953 4767 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019961 4767 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019970 4767 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019978 4767 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019986 4767 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.019995 4767 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020003 4767 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020011 4767 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020019 4767 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020027 4767 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020035 4767 feature_gate.go:330] unrecognized feature gate: Example Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020043 4767 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020053 4767 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020061 4767 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020068 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020076 4767 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020084 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020092 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020100 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020107 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020115 4767 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020125 4767 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020132 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020141 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020148 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020155 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 
21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020163 4767 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020170 4767 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020178 4767 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020190 4767 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020199 4767 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020208 4767 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020217 4767 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020226 4767 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020235 4767 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020242 4767 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.020251 4767 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020467 4767 flags.go:64] FLAG: --address="0.0.0.0" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020487 4767 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020500 4767 flags.go:64] FLAG: --anonymous-auth="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020511 4767 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020523 4767 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020532 4767 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020544 4767 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020556 4767 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020565 4767 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020574 4767 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020586 4767 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020599 4767 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020609 4767 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020618 4767 flags.go:64] FLAG: --cgroup-root="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020626 4767 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020635 4767 flags.go:64] FLAG: --client-ca-file="" Nov 24 21:38:38 crc 
kubenswrapper[4767]: I1124 21:38:38.020644 4767 flags.go:64] FLAG: --cloud-config="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020653 4767 flags.go:64] FLAG: --cloud-provider="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020662 4767 flags.go:64] FLAG: --cluster-dns="[]" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020673 4767 flags.go:64] FLAG: --cluster-domain="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020682 4767 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020691 4767 flags.go:64] FLAG: --config-dir="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020701 4767 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020712 4767 flags.go:64] FLAG: --container-log-max-files="5" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020725 4767 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020735 4767 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020744 4767 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020754 4767 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020763 4767 flags.go:64] FLAG: --contention-profiling="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020772 4767 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020780 4767 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020790 4767 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020799 4767 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020810 4767 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020819 4767 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020828 4767 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020836 4767 flags.go:64] FLAG: --enable-load-reader="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020845 4767 flags.go:64] FLAG: --enable-server="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020856 4767 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020867 4767 flags.go:64] FLAG: --event-burst="100" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020877 4767 flags.go:64] FLAG: --event-qps="50" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020886 4767 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020895 4767 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020904 4767 flags.go:64] FLAG: --eviction-hard="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020924 4767 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020934 4767 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 24 21:38:38 crc 
kubenswrapper[4767]: I1124 21:38:38.020943 4767 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020954 4767 flags.go:64] FLAG: --eviction-soft="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020964 4767 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020973 4767 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020982 4767 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.020991 4767 flags.go:64] FLAG: --experimental-mounter-path="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021000 4767 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021010 4767 flags.go:64] FLAG: --fail-swap-on="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021019 4767 flags.go:64] FLAG: --feature-gates="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021030 4767 flags.go:64] FLAG: --file-check-frequency="20s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021039 4767 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021048 4767 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021057 4767 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021067 4767 flags.go:64] FLAG: --healthz-port="10248" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021076 4767 flags.go:64] FLAG: --help="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021085 4767 flags.go:64] FLAG: --hostname-override="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021094 4767 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021103 4767 flags.go:64] FLAG: --http-check-frequency="20s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021113 4767 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021122 4767 flags.go:64] FLAG: --image-credential-provider-config="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021130 4767 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021139 4767 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021148 4767 flags.go:64] FLAG: --image-service-endpoint="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021157 4767 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021167 4767 flags.go:64] FLAG: --kube-api-burst="100" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021176 4767 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021186 4767 flags.go:64] FLAG: --kube-api-qps="50" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021195 4767 flags.go:64] FLAG: --kube-reserved="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021205 4767 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021214 4767 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 24 21:38:38 crc 
kubenswrapper[4767]: I1124 21:38:38.021223 4767 flags.go:64] FLAG: --kubelet-cgroups="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021231 4767 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021240 4767 flags.go:64] FLAG: --lock-file="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021249 4767 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021258 4767 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021292 4767 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021306 4767 flags.go:64] FLAG: --log-json-split-stream="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021317 4767 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021326 4767 flags.go:64] FLAG: --log-text-split-stream="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021335 4767 flags.go:64] FLAG: --logging-format="text" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021344 4767 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021354 4767 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021363 4767 flags.go:64] FLAG: --manifest-url="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021371 4767 flags.go:64] FLAG: --manifest-url-header="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021383 4767 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021392 4767 flags.go:64] FLAG: --max-open-files="1000000" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021402 4767 flags.go:64] FLAG: --max-pods="110" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021411 4767 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021420 4767 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021430 4767 flags.go:64] FLAG: --memory-manager-policy="None" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021439 4767 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021447 4767 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021456 4767 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021465 4767 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021486 4767 flags.go:64] FLAG: --node-status-max-images="50" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021495 4767 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021504 4767 flags.go:64] FLAG: --oom-score-adj="-999" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021513 4767 flags.go:64] FLAG: --pod-cidr="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021521 4767 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021535 4767 flags.go:64] FLAG: --pod-manifest-path="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021544 4767 flags.go:64] FLAG: --pod-max-pids="-1" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021553 4767 flags.go:64] FLAG: --pods-per-core="0" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021562 4767 flags.go:64] FLAG: --port="10250" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021571 4767 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021581 4767 flags.go:64] FLAG: --provider-id="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021590 4767 flags.go:64] FLAG: --qos-reserved="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021599 4767 flags.go:64] FLAG: --read-only-port="10255" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021609 4767 flags.go:64] FLAG: --register-node="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021618 4767 flags.go:64] FLAG: --register-schedulable="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021626 4767 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021641 4767 flags.go:64] FLAG: --registry-burst="10" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021650 4767 flags.go:64] FLAG: --registry-qps="5" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021659 4767 flags.go:64] FLAG: --reserved-cpus="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021670 4767 flags.go:64] FLAG: --reserved-memory="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021681 4767 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021691 4767 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021701 4767 flags.go:64] FLAG: --rotate-certificates="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021710 4767 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021718 4767 flags.go:64] FLAG: --runonce="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021727 4767 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021736 4767 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021746 4767 flags.go:64] FLAG: --seccomp-default="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021754 4767 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021763 4767 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021772 4767 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021781 4767 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021790 4767 flags.go:64] FLAG: --storage-driver-password="root" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021799 4767 flags.go:64] FLAG: --storage-driver-secure="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 
21:38:38.021808 4767 flags.go:64] FLAG: --storage-driver-table="stats" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021816 4767 flags.go:64] FLAG: --storage-driver-user="root" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021825 4767 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021834 4767 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021846 4767 flags.go:64] FLAG: --system-cgroups="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021854 4767 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021870 4767 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021879 4767 flags.go:64] FLAG: --tls-cert-file="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021888 4767 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021900 4767 flags.go:64] FLAG: --tls-min-version="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021908 4767 flags.go:64] FLAG: --tls-private-key-file="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021919 4767 flags.go:64] FLAG: --topology-manager-policy="none" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021928 4767 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021938 4767 flags.go:64] FLAG: --topology-manager-scope="container" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021947 4767 flags.go:64] FLAG: --v="2" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021958 4767 flags.go:64] FLAG: --version="false" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021970 4767 flags.go:64] FLAG: --vmodule="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021981 4767 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.021991 4767 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022204 4767 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022214 4767 feature_gate.go:330] unrecognized feature gate: Example Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022227 4767 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
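
The flags.go:64 dump above prints every CLI flag with its effective quoted value, defaults included. A sketch under the same assumptions (Python 3, the excerpt saved as the hypothetical kubelet.log) that folds the dump into a dict, which makes the non-default settings easy to pick out:

import re

# One entry per record, like: flags.go:64] FLAG: --node-ip="192.168.126.11"
FLAG = re.compile(r'flags\.go:64\] FLAG: (--[\w-]+)="(.*?)"')

with open("kubelet.log", encoding="utf-8") as f:  # hypothetical file name
    flags = dict(FLAG.findall(f.read()))

print(len(flags), "flags captured")
print(flags["--node-ip"])               # 192.168.126.11
print(flags["--register-with-taints"])  # node-role.kubernetes.io/master=:NoSchedule
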
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022238 4767 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022247 4767 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022256 4767 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022265 4767 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022297 4767 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022305 4767 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022313 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022322 4767 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022331 4767 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022342 4767 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022352 4767 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022361 4767 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022370 4767 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022378 4767 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022387 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022394 4767 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022402 4767 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022410 4767 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022418 4767 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022426 4767 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022434 4767 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022442 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022451 4767 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022459 4767 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022467 4767 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022475 4767 feature_gate.go:330] unrecognized 
feature gate: VSphereMultiVCenters Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022483 4767 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022491 4767 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022498 4767 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022506 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022513 4767 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022521 4767 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022531 4767 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022541 4767 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022551 4767 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022560 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022568 4767 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022576 4767 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022585 4767 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022593 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022601 4767 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022609 4767 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022618 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022633 4767 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022640 4767 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022648 4767 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022656 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022663 4767 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022672 4767 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022679 4767 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022687 4767 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 21:38:38 crc 
kubenswrapper[4767]: W1124 21:38:38.022695 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022703 4767 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022711 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022719 4767 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022727 4767 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022734 4767 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022745 4767 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022755 4767 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022765 4767 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022774 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022783 4767 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022791 4767 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022799 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022807 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022815 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022823 4767 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.022830 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.022843 4767 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.035936 4767 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.036012 4767 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036103 4767 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036114 4767 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036119 4767 
feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036124 4767 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036128 4767 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036134 4767 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036140 4767 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036148 4767 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036152 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036156 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036161 4767 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036165 4767 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036169 4767 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036174 4767 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036178 4767 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036181 4767 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036185 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036189 4767 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036193 4767 feature_gate.go:330] unrecognized feature gate: Example Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036197 4767 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036201 4767 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036205 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036208 4767 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036212 4767 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036215 4767 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036219 4767 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036223 4767 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036227 4767 feature_gate.go:351] Setting 
deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036232 4767 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036237 4767 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036240 4767 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036244 4767 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036247 4767 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036255 4767 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036261 4767 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036279 4767 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036285 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036291 4767 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036296 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036302 4767 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036308 4767 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036313 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036318 4767 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036325 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036330 4767 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036337 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036341 4767 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036346 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036351 4767 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036356 4767 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036359 4767 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036364 4767 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036369 4767 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036373 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036377 4767 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036381 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036385 4767 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036389 4767 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036393 4767 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036396 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036400 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036404 4767 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036408 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036412 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036416 4767 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036422 4767 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036425 4767 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036455 4767 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
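
Mixed into the unrecognized-gate noise (which continues below) are feature_gate.go:351/353 lines for gates the wrapper force-sets even though they are GA or deprecated upstream. A sketch, same assumptions as above, that isolates those:

import re

# Matches: Setting GA feature gate CloudDualStackNodeIPs=true. ...
#          Setting deprecated feature gate KMSv1=true. ...
FORCED = re.compile(r"Setting (GA|deprecated) feature gate (\w+)=(\w+)")

with open("kubelet.log", encoding="utf-8") as f:  # hypothetical file name
    forced = sorted(set(FORCED.findall(f.read())))

for kind, gate, value in forced:
    print(f"{gate}={value} ({kind} upstream, slated for removal)")
# Expect CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders,
# KMSv1, and ValidatingAdmissionPolicy, all =true
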
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036459 4767 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036463 4767 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036468 4767 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.036476 4767 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036636 4767 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036642 4767 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036646 4767 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036651 4767 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036654 4767 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036658 4767 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036662 4767 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036666 4767 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036671 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036676 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036680 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036684 4767 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036688 4767 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036692 4767 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036695 4767 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036700 4767 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036705 4767 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036709 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036713 4767 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036717 4767 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036720 4767 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036724 4767 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036727 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036731 4767 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036734 4767 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036738 4767 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036743 4767 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036747 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036750 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036754 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036758 4767 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036761 4767 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036765 4767 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036768 4767 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036773 4767 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036776 4767 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036782 4767 feature_gate.go:330] unrecognized feature gate: Example Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036786 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036791 4767 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036795 4767 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036800 4767 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036805 4767 
feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036809 4767 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036814 4767 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036817 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036822 4767 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036826 4767 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036830 4767 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036833 4767 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036837 4767 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036841 4767 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036844 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036849 4767 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036854 4767 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036858 4767 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036862 4767 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036865 4767 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036872 4767 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036877 4767 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036881 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036886 4767 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036891 4767 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036919 4767 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036924 4767 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036928 4767 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036932 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036936 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036940 4767 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036943 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036947 4767 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.036952 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.036957 4767 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.037212 4767 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.042604 4767 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.042704 4767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
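
The feature_gate.go:386 summary just above is the resolved result. It appears three times in this excerpt, once per parsing pass, with identical contents, which is also why the unrecognized-gate flood repeats. A sketch, same assumptions as above, that parses each summary into a dict, confirms the passes agree, and tallies the distinct unrecognized names:

import re
from collections import Counter

with open("kubelet.log", encoding="utf-8") as f:  # hypothetical file name
    text = f.read()

# Summaries look like: feature gates: {map[KMSv1:true NodeSwap:false ...]}
maps = [
    dict(item.split(":", 1) for item in body.split())
    for body in re.findall(r"feature gates: \{map\[(.*?)\]\}", text)
]
assert maps and all(m == maps[0] for m in maps), "parsing passes disagree"

unknown = Counter(re.findall(r"unrecognized feature gate: (\w+)", text))
print(f"{len(maps)} identical summaries; {len(unknown)} distinct unrecognized gates")
print("KMSv1 =", maps[0]["KMSv1"])  # true
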
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.052377 4767 server.go:997] "Starting client certificate rotation"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.052407 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.058841 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-24 23:34:44.647214589 +0000 UTC
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.058979 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1h56m6.588240754s for next certificate rotation
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.102460 4767 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.104365 4767 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.137085 4767 log.go:25] "Validated CRI v1 runtime API"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.175994 4767 log.go:25] "Validated CRI v1 image API"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.178683 4767 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.185688 4767 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-21-33-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.185739 4767 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.216861 4767 manager.go:217] Machine: {Timestamp:2025-11-24 21:38:38.213963979 +0000 UTC m=+1.130947401 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:575c8020-5419-4b9b-904a-464e70414810 BootID:7cfbd01d-abd4-4a8c-9957-ee552fd378d0 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4b:da:a3 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4b:da:a3 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:46:12:80 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:be:e2:b2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2d:15:64 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b4:23:d4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:36:73:cf:51:c1:39 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:24:f6:6b:1c:dd Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.217398 4767 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.217778 4767 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.218556 4767 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.218825 4767 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.218867 4767 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.219111 4767 topology_manager.go:138] "Creating topology manager with none policy"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.219124 4767 container_manager_linux.go:303] "Creating device plugin manager"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.219700 4767 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.219744 4767 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
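The nodeConfig above fixes the node's resource accounting: SystemReserved carves out 200m CPU and 350Mi memory, KubeReserved is null, and the memory.available hard-eviction threshold is 100Mi. Allocatable memory follows the standard rule capacity - reserved - eviction-hard; a sketch doing that arithmetic with the numbers from these log lines (the formula is the generic node-allocatable rule, not code lifted from the kubelet):

```go
// Sketch: derive allocatable memory from the logged nodeConfig values.
package main

import "fmt"

func main() {
	const (
		mi             = 1024 * 1024
		capacityBytes  = 33654128640 // MemoryCapacity from manager.go:217
		systemReserved = 350 * mi    // SystemReserved "memory":"350Mi"; KubeReserved is null
		evictionHard   = 100 * mi    // memory.available hard threshold "100Mi"
	)
	allocatable := capacityBytes - systemReserved - evictionHard
	fmt.Printf("allocatable memory: %d bytes (~%.2f GiB)\n", allocatable, float64(allocatable)/(1024*1024*1024))
}
```

That leaves roughly 30.9 GiB of the 32 GiB machine schedulable for pods.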
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.219941 4767 state_mem.go:36] "Initialized new in-memory state store"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.220051 4767 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.224831 4767 kubelet.go:418] "Attempting to sync node with API server"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.224876 4767 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.224927 4767 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.224945 4767 kubelet.go:324] "Adding apiserver pod source"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.224960 4767 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.229253 4767 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.230311 4767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.231568 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.231660 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError"
Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.231762 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.231929 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.236001 4767 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238619 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238650 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238659 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238666 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
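The reflector warnings are expected at this point in boot: the kubelet's informers try to list Services and Nodes from api-int.crc.testing:6443, but kube-apiserver itself runs as a static pod this kubelet has not started yet, so every dial is refused and client-go retries. A standalone probe (stdlib only, endpoint taken from the log, backoff values invented) showing the same refused-then-retry loop:

```go
// Sketch: retry a TCP dial to the apiserver endpoint until it accepts,
// with a capped exponential backoff. Not the kubelet's code.
package main

import (
	"log"
	"net"
	"time"
)

func main() {
	backoff := 500 * time.Millisecond
	for {
		conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
		if err == nil {
			conn.Close()
			log.Println("apiserver endpoint is accepting connections")
			return
		}
		log.Printf("dial failed (%v); retrying in %s", err, backoff)
		time.Sleep(backoff)
		if backoff *= 2; backoff > 16*time.Second {
			backoff = 16 * time.Second
		}
	}
}
```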
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238678 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238685 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238693 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238734 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238744 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238753 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238781 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.238790 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.239513 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.239999 4767 server.go:1280] "Started kubelet"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.240236 4767 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.240229 4767 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 24 21:38:38 crc systemd[1]: Started Kubernetes Kubelet.
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.242720 4767 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.243883 4767 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.251284 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.251366 4767 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.252187 4767 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.252216 4767 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.252670 4767 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.252166 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:15:20.751003258 +0000 UTC
Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.253613 4767 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
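The volume_manager lines describe two bookkeeping sets: a desired state populated from pod specs and an actual state reflecting what is really mounted, with a reconciler mounting and unmounting until they agree. The reconstruct.go entries that follow seed the actual state from what is found on disk after restart, marked "uncertain" until verified. A toy model of that split, with invented types (the real reconciler in the kubelet's volumemanager package also tracks devices, SELinux contexts, and uncertain states):

```go
// Sketch: desired-state vs actual-state reconciliation, the shape of the
// loop the volume_manager log lines refer to.
package main

import "fmt"

type volumeKey string // e.g. "podUID/volumeName"

func reconcile(desired, actual map[volumeKey]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Println("mount:", v) // populator saw it in a pod spec
			actual[v] = true
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Println("unmount:", v) // pod gone; tear the mount down
			delete(actual, v)
		}
	}
}

func main() {
	desired := map[volumeKey]bool{"pod-a/config": true, "pod-b/secret": true}
	actual := map[volumeKey]bool{"pod-c/stale": true} // e.g. reconstructed after restart
	reconcile(desired, actual)
}
```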
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.254227 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.255389 4767 factory.go:55] Registering systemd factory Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.255694 4767 factory.go:221] Registration of the systemd container factory successfully Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.254602 4767 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b0f221d43cd67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 21:38:38.239968615 +0000 UTC m=+1.156951987,LastTimestamp:2025-11-24 21:38:38.239968615 +0000 UTC m=+1.156951987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.257686 4767 factory.go:153] Registering CRI-O factory Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.257729 4767 factory.go:221] Registration of the crio container factory successfully Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.257841 4767 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.257878 4767 factory.go:103] Registering Raw factory Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.257898 4767 manager.go:1196] Started watching for new ooms in manager Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.257970 4767 server.go:460] "Adding debug handlers to kubelet server" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.258659 4767 manager.go:319] Starting recovery of all containers Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.259963 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266462 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266538 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266560 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266578 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266599 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266618 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266655 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266674 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266702 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266721 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266737 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266754 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266771 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266791 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266808 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266827 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266869 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266886 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266902 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266919 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266935 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266951 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266975 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.266993 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267010 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267027 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267053 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267075 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267095 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267112 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267129 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267147 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267168 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267188 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267205 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267225 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267243 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267260 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267305 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267323 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267339 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267357 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267377 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267396 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267416 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267435 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267456 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267474 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267492 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.267514 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273222 4767 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273328 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273355 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273382 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273401 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273415 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273429 4767 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273442 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273454 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273466 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273477 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273488 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273502 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273544 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273556 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273567 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273580 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 
21:38:38.273592 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273604 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273616 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273628 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273639 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273649 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273658 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273667 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273680 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273691 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273704 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273714 4767 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273725 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273737 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273748 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273757 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273768 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273778 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273789 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273800 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273809 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273820 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273832 4767 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273843 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273853 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273863 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273876 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273898 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273910 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273923 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273934 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273946 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273957 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273968 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273978 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.273990 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274003 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274015 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274033 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274046 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274061 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274074 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274087 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274099 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274110 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274124 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274136 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274151 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274164 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274181 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274193 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274209 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274220 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274234 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274247 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274259 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274286 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274296 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274308 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274319 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274332 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274344 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274356 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274366 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274378 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274390 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274400 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274412 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274424 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274434 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274445 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274457 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274470 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274482 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274495 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274507 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274518 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274530 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274542 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274554 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274565 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274577 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274589 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274603 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274614 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274628 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274640 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274652 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274663 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274674 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274685 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274709 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274722 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274734 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274747 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274758 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274770 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274781 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274793 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274804 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274815 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274825 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274836 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274848 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274859 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274871 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274884 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274894 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274905 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274917 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274927 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274937 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274949 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274961 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274972 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274983 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.274995 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275007 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275020 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275033 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275045 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275058 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275069 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275079 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275093 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275106 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275117 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275128 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275138 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275149 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275161 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275180 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275192 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275203 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275215 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275227 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275239 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275252 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275278 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275292 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275304 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275316 4767 reconstruct.go:97] "Volume reconstruction finished" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.275325 4767 reconciler.go:26] "Reconciler: start to sync state" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.286814 4767 manager.go:324] Recovery completed Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.303144 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.305544 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.305590 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc 
kubenswrapper[4767]: I1124 21:38:38.305602 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.307139 4767 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.307156 4767 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.307173 4767 state_mem.go:36] "Initialized new in-memory state store" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.308998 4767 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.311460 4767 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.312037 4767 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.312079 4767 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.312150 4767 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 21:38:38 crc kubenswrapper[4767]: W1124 21:38:38.312865 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.312935 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.353902 4767 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.378483 4767 policy_none.go:49] "None policy: Start" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.379702 4767 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.379749 4767 state_mem.go:35] "Initializing new in-memory state store" Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.412257 4767 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.435901 4767 manager.go:334] "Starting Device Plugin manager" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.436380 4767 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.436410 4767 server.go:79] "Starting device plugin registration server" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.437060 4767 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.437087 4767 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.437334 4767 
plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.437531 4767 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.437542 4767 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.446884 4767 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.460657 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="400ms" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.538262 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.539956 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.540015 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.540030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.540074 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.540806 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.614206 4767 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.614362 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.616560 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.616603 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.616612 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.616753 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.616884 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.616940 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.617956 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.618008 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.618026 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.618328 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.618380 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.618398 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.618579 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.618785 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.618835 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.620030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.620078 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.620132 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.620414 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.620633 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.620695 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.620794 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.620837 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.620855 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.621599 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.621636 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.621654 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.621603 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.621853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.621934 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.622216 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.622461 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.622508 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.623557 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.623651 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.623710 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.623980 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.624061 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.624129 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.624447 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.624566 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.625315 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.625351 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.625361 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.680289 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.680693 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.680892 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681002 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681070 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681095 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681146 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681168 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681186 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681250 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681287 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681303 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681319 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681370 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.681448 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.741862 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.743144 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.743286 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 
21:38:38.743355 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.743446 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.744635 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782306 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782385 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782417 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782448 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782473 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782536 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782567 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782594 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782623 4767 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782620 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782646 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782670 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782706 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782713 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782741 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782752 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782721 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782824 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782765 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782787 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782791 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782768 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782881 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782820 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782806 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782930 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782955 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782994 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.783026 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.782821 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: E1124 21:38:38.861913 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.952795 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.960856 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.980739 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:38 crc kubenswrapper[4767]: I1124 21:38:38.997353 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.001761 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:38:39 crc kubenswrapper[4767]: W1124 21:38:39.013050 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a572fa77a144567808cb1e4eaa39378e3b2dc4bd2b1b866f981850bb5b78c13d WatchSource:0}: Error finding container a572fa77a144567808cb1e4eaa39378e3b2dc4bd2b1b866f981850bb5b78c13d: Status 404 returned error can't find the container with id a572fa77a144567808cb1e4eaa39378e3b2dc4bd2b1b866f981850bb5b78c13d Nov 24 21:38:39 crc kubenswrapper[4767]: W1124 21:38:39.013549 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f86145eaa4d9f1a5103eecd8e459ad5b9efbdc22258211d5f85036a4b62ff745 WatchSource:0}: Error finding container f86145eaa4d9f1a5103eecd8e459ad5b9efbdc22258211d5f85036a4b62ff745: Status 404 returned error can't find the container with id f86145eaa4d9f1a5103eecd8e459ad5b9efbdc22258211d5f85036a4b62ff745 Nov 24 21:38:39 crc kubenswrapper[4767]: W1124 21:38:39.022860 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e5fa263e98eb475e4f6296a07a4fa55f02d73c5c1efc74c189d89291db9181c8 WatchSource:0}: Error finding container e5fa263e98eb475e4f6296a07a4fa55f02d73c5c1efc74c189d89291db9181c8: Status 404 returned error can't find the container with id e5fa263e98eb475e4f6296a07a4fa55f02d73c5c1efc74c189d89291db9181c8 Nov 24 21:38:39 crc kubenswrapper[4767]: W1124 21:38:39.031818 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-3f266bf909588f92d3441e384fa61a0c42c065b22dcaf9a49e5cb566cb8e422d WatchSource:0}: Error finding container 3f266bf909588f92d3441e384fa61a0c42c065b22dcaf9a49e5cb566cb8e422d: Status 404 returned error can't find the container with id 3f266bf909588f92d3441e384fa61a0c42c065b22dcaf9a49e5cb566cb8e422d Nov 24 21:38:39 crc kubenswrapper[4767]: W1124 21:38:39.033034 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ca80a764747c5e159fef13e418322d7a6d9a2623dbf796f88e77bcced49eb1b4 WatchSource:0}: Error finding container ca80a764747c5e159fef13e418322d7a6d9a2623dbf796f88e77bcced49eb1b4: Status 404 returned error can't find the container with id ca80a764747c5e159fef13e418322d7a6d9a2623dbf796f88e77bcced49eb1b4 Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.145433 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.148127 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.148164 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.148174 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.148198 4767 kubelet_node_status.go:76] "Attempting to 
register node" node="crc" Nov 24 21:38:39 crc kubenswrapper[4767]: E1124 21:38:39.148644 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Nov 24 21:38:39 crc kubenswrapper[4767]: W1124 21:38:39.191794 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 24 21:38:39 crc kubenswrapper[4767]: E1124 21:38:39.191887 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.244184 4767 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.253191 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:09:48.041026727 +0000 UTC Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.253252 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 335h31m8.787777271s for next certificate rotation Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.317218 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ca80a764747c5e159fef13e418322d7a6d9a2623dbf796f88e77bcced49eb1b4"} Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.317962 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3f266bf909588f92d3441e384fa61a0c42c065b22dcaf9a49e5cb566cb8e422d"} Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.318939 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e5fa263e98eb475e4f6296a07a4fa55f02d73c5c1efc74c189d89291db9181c8"} Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.320012 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f86145eaa4d9f1a5103eecd8e459ad5b9efbdc22258211d5f85036a4b62ff745"} Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.320995 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a572fa77a144567808cb1e4eaa39378e3b2dc4bd2b1b866f981850bb5b78c13d"} Nov 24 21:38:39 crc kubenswrapper[4767]: W1124 21:38:39.432880 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 24 21:38:39 crc kubenswrapper[4767]: E1124 21:38:39.432997 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:38:39 crc kubenswrapper[4767]: E1124 21:38:39.663511 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s" Nov 24 21:38:39 crc kubenswrapper[4767]: W1124 21:38:39.809722 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 24 21:38:39 crc kubenswrapper[4767]: W1124 21:38:39.809773 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 24 21:38:39 crc kubenswrapper[4767]: E1124 21:38:39.809840 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:38:39 crc kubenswrapper[4767]: E1124 21:38:39.809845 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.948758 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.950157 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.950211 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.950230 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:39 crc kubenswrapper[4767]: I1124 21:38:39.950261 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:38:39 crc kubenswrapper[4767]: E1124 21:38:39.950818 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.243547 4767 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.328540 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e"} Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.328625 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0"} Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.328651 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d"} Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.330490 4767 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884" exitCode=0 Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.330607 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884"} Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.330671 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.332543 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.332584 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.332658 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.336030 4767 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5" exitCode=0 Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.336150 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5"} Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.336215 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.339412 4767 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="245368d49f9376a351e1d8e770b9360dca5522c07d3a67b79f4dc16c5fa6bb3b" exitCode=0 Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.339522 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"245368d49f9376a351e1d8e770b9360dca5522c07d3a67b79f4dc16c5fa6bb3b"} Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.339594 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.339641 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.339659 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.339684 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.342929 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.342988 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.343016 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.344604 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.350368 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.350393 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.350404 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.350689 4767 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e" exitCode=0 Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.350750 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e"} Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.350893 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.353225 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.353303 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:40 crc kubenswrapper[4767]: I1124 21:38:40.353322 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:40 crc kubenswrapper[4767]: W1124 21:38:40.814673 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 24 21:38:40 crc kubenswrapper[4767]: E1124 21:38:40.815085 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.243379 4767 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 24 21:38:41 crc kubenswrapper[4767]: E1124 21:38:41.264480 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="3.2s" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.358003 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0"} Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.358041 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e"} Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.358050 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a"} Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.358062 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116"} Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.360413 4767 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0c3024b418062d1bcaa0cb8df37847aaf1fd86d684f56eba176fd6d343630dc9" exitCode=0 Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.360501 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0c3024b418062d1bcaa0cb8df37847aaf1fd86d684f56eba176fd6d343630dc9"} Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.360659 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.362220 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.362250 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 
21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.362264 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.365131 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"95ef545043d6d5f94fe8d953f6e2662eae3be156c562322770edfc3488fc0a3c"} Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.365247 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.366700 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.366736 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.366751 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.372891 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e"} Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.373031 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.376526 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.376554 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.376562 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.380922 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484"} Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.380959 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2"} Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.380975 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0"} Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.381057 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.381754 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 
21:38:41.381776 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.381784 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.551131 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.552317 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.552383 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.552399 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:41 crc kubenswrapper[4767]: I1124 21:38:41.552440 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:38:41 crc kubenswrapper[4767]: E1124 21:38:41.553183 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.385341 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.387465 4767 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="953ba71bc4cef1ebfaa7cbf64abdc48094ae19ecc8b09303d8aff226b1366c39" exitCode=255 Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.387563 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"953ba71bc4cef1ebfaa7cbf64abdc48094ae19ecc8b09303d8aff226b1366c39"} Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.387630 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.389177 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.389220 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.389231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.389726 4767 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a236547ad8da04f7a0e03d8fca2c000a1353a7699f847f48d361779e02eef40f" exitCode=0 Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.389812 4767 scope.go:117] "RemoveContainer" containerID="953ba71bc4cef1ebfaa7cbf64abdc48094ae19ecc8b09303d8aff226b1366c39" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.389882 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.389911 4767 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a236547ad8da04f7a0e03d8fca2c000a1353a7699f847f48d361779e02eef40f"} Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.389933 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.389976 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.389940 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.390045 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.390974 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.391009 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.391021 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.391136 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.391162 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.391170 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.391309 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.391330 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.391339 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.395780 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.395875 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.395889 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:42 crc kubenswrapper[4767]: I1124 21:38:42.419760 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.393596 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.395585 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88"} Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.395646 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.396330 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.396368 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.396381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.398618 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.398625 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d0041294f0b061bb671f17ba20c1a0fd49fc34f5fbed1331cd9c62d684790e5a"} Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.398729 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3b7e5779bfceae26c9252531fdded6cebf637ea56f1c809a826e0d46434e9d0b"} Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.398740 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7c9ddb939de1ca54c392b7b2c141bf0842985c8ad93e9ed6182553c231c180a7"} Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.398750 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"977c946ea60eebceaa25e0313dc1eca84808035661a054587965b7f6cca3d539"} Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.399120 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.399141 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.399150 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.575105 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.575254 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.576348 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.576413 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.576434 4767 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:43 crc kubenswrapper[4767]: I1124 21:38:43.593099 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.405561 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"448ebb34ee7ccd53165446a6967ac637c7525b26c566161360466da5a6fc3d82"} Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.405624 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.405655 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.405687 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.406749 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.406793 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.406811 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.407011 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.407061 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.407082 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.754428 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.759117 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.759217 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.759245 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:44 crc kubenswrapper[4767]: I1124 21:38:44.759349 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:38:45 crc kubenswrapper[4767]: I1124 21:38:45.409591 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:45 crc kubenswrapper[4767]: I1124 21:38:45.409653 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:38:45 crc kubenswrapper[4767]: I1124 21:38:45.409743 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:45 crc kubenswrapper[4767]: I1124 21:38:45.411060 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:45 crc 
kubenswrapper[4767]: I1124 21:38:45.411124 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:45 crc kubenswrapper[4767]: I1124 21:38:45.411151 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:45 crc kubenswrapper[4767]: I1124 21:38:45.411352 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:45 crc kubenswrapper[4767]: I1124 21:38:45.411407 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:45 crc kubenswrapper[4767]: I1124 21:38:45.411430 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:45 crc kubenswrapper[4767]: I1124 21:38:45.568422 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 21:38:46 crc kubenswrapper[4767]: I1124 21:38:46.190329 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 21:38:46 crc kubenswrapper[4767]: I1124 21:38:46.413315 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:46 crc kubenswrapper[4767]: I1124 21:38:46.414452 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:46 crc kubenswrapper[4767]: I1124 21:38:46.414507 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:46 crc kubenswrapper[4767]: I1124 21:38:46.414523 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:46 crc kubenswrapper[4767]: I1124 21:38:46.562023 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:46 crc kubenswrapper[4767]: I1124 21:38:46.562243 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:38:46 crc kubenswrapper[4767]: I1124 21:38:46.562335 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:46 crc kubenswrapper[4767]: I1124 21:38:46.563411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:46 crc kubenswrapper[4767]: I1124 21:38:46.563441 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:46 crc kubenswrapper[4767]: I1124 21:38:46.563454 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.343919 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.344141 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.345808 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.345870 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.345890 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.416448 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.418603 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.418664 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.418682 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.469567 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.469811 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.471525 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.471596 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.471617 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:47 crc kubenswrapper[4767]: I1124 21:38:47.484540 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:48 crc kubenswrapper[4767]: I1124 21:38:48.221082 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:48 crc kubenswrapper[4767]: I1124 21:38:48.319410 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:48 crc kubenswrapper[4767]: I1124 21:38:48.319678 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:48 crc kubenswrapper[4767]: I1124 21:38:48.321066 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:48 crc kubenswrapper[4767]: I1124 21:38:48.321113 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:48 crc kubenswrapper[4767]: I1124 21:38:48.321129 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:48 crc kubenswrapper[4767]: I1124 21:38:48.418733 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:48 crc kubenswrapper[4767]: I1124 21:38:48.419808 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:48 crc kubenswrapper[4767]: I1124 21:38:48.419846 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 
21:38:48 crc kubenswrapper[4767]: I1124 21:38:48.419855 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:48 crc kubenswrapper[4767]: E1124 21:38:48.447757 4767 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 21:38:49 crc kubenswrapper[4767]: I1124 21:38:49.421196 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:49 crc kubenswrapper[4767]: I1124 21:38:49.422563 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:49 crc kubenswrapper[4767]: I1124 21:38:49.422609 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:49 crc kubenswrapper[4767]: I1124 21:38:49.422621 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:49 crc kubenswrapper[4767]: I1124 21:38:49.425495 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:50 crc kubenswrapper[4767]: I1124 21:38:50.344521 4767 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 21:38:50 crc kubenswrapper[4767]: I1124 21:38:50.344627 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:38:50 crc kubenswrapper[4767]: I1124 21:38:50.424792 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:50 crc kubenswrapper[4767]: I1124 21:38:50.426144 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:50 crc kubenswrapper[4767]: I1124 21:38:50.426238 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:50 crc kubenswrapper[4767]: I1124 21:38:50.426258 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:52 crc kubenswrapper[4767]: I1124 21:38:52.243615 4767 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 24 21:38:52 crc kubenswrapper[4767]: I1124 21:38:52.420615 4767 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 24 21:38:52 crc kubenswrapper[4767]: I1124 21:38:52.420730 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 21:38:52 crc kubenswrapper[4767]: W1124 21:38:52.500913 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 21:38:52 crc kubenswrapper[4767]: I1124 21:38:52.501066 4767 trace.go:236] Trace[221647870]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:38:42.499) (total time: 10001ms): Nov 24 21:38:52 crc kubenswrapper[4767]: Trace[221647870]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:38:52.500) Nov 24 21:38:52 crc kubenswrapper[4767]: Trace[221647870]: [10.001773517s] [10.001773517s] END Nov 24 21:38:52 crc kubenswrapper[4767]: E1124 21:38:52.501106 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 21:38:52 crc kubenswrapper[4767]: W1124 21:38:52.609015 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 21:38:52 crc kubenswrapper[4767]: I1124 21:38:52.609181 4767 trace.go:236] Trace[2033189371]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:38:42.608) (total time: 10000ms): Nov 24 21:38:52 crc kubenswrapper[4767]: Trace[2033189371]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (21:38:52.608) Nov 24 21:38:52 crc kubenswrapper[4767]: Trace[2033189371]: [10.000857373s] [10.000857373s] END Nov 24 21:38:52 crc kubenswrapper[4767]: E1124 21:38:52.609216 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 21:38:52 crc kubenswrapper[4767]: W1124 21:38:52.686523 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 21:38:52 crc kubenswrapper[4767]: I1124 21:38:52.686648 4767 trace.go:236] Trace[275872227]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:38:42.685) (total time: 10001ms): Nov 24 21:38:52 crc kubenswrapper[4767]: Trace[275872227]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:38:52.686) Nov 24 21:38:52 crc kubenswrapper[4767]: Trace[275872227]: [10.001479395s] 
[10.001479395s] END Nov 24 21:38:52 crc kubenswrapper[4767]: E1124 21:38:52.686678 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 21:38:52 crc kubenswrapper[4767]: I1124 21:38:52.877191 4767 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 21:38:52 crc kubenswrapper[4767]: I1124 21:38:52.877298 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 21:38:52 crc kubenswrapper[4767]: I1124 21:38:52.883086 4767 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 21:38:52 crc kubenswrapper[4767]: I1124 21:38:52.883146 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 21:38:53 crc kubenswrapper[4767]: I1124 21:38:53.601908 4767 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]log ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]etcd ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/generic-apiserver-start-informers ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/priority-and-fairness-filter ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/start-apiextensions-informers ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/start-apiextensions-controllers ok Nov 24 21:38:53 crc kubenswrapper[4767]: 
[+]poststarthook/crd-informer-synced ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/start-system-namespaces-controller ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 24 21:38:53 crc kubenswrapper[4767]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 24 21:38:53 crc kubenswrapper[4767]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/bootstrap-controller ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/start-kube-aggregator-informers ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-registration-controller ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-discovery-controller ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]autoregister-completion ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-openapi-controller ok Nov 24 21:38:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 24 21:38:53 crc kubenswrapper[4767]: livez check failed Nov 24 21:38:53 crc kubenswrapper[4767]: I1124 21:38:53.601975 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:38:56 crc kubenswrapper[4767]: I1124 21:38:56.289729 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 21:38:56 crc kubenswrapper[4767]: I1124 21:38:56.289892 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:56 crc kubenswrapper[4767]: I1124 21:38:56.291643 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:56 crc kubenswrapper[4767]: I1124 21:38:56.291768 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:56 crc kubenswrapper[4767]: I1124 21:38:56.291794 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:56 crc kubenswrapper[4767]: I1124 21:38:56.312031 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 21:38:56 crc kubenswrapper[4767]: I1124 21:38:56.441715 
4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:38:56 crc kubenswrapper[4767]: I1124 21:38:56.442745 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:38:56 crc kubenswrapper[4767]: I1124 21:38:56.442787 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:38:56 crc kubenswrapper[4767]: I1124 21:38:56.442797 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:38:56 crc kubenswrapper[4767]: I1124 21:38:56.876261 4767 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 21:38:57 crc kubenswrapper[4767]: I1124 21:38:57.104110 4767 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 21:38:57 crc kubenswrapper[4767]: I1124 21:38:57.320898 4767 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 21:38:57 crc kubenswrapper[4767]: E1124 21:38:57.871208 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 21:38:57 crc kubenswrapper[4767]: I1124 21:38:57.873956 4767 trace.go:236] Trace[2065946472]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:38:47.020) (total time: 10853ms): Nov 24 21:38:57 crc kubenswrapper[4767]: Trace[2065946472]: ---"Objects listed" error: 10853ms (21:38:57.873) Nov 24 21:38:57 crc kubenswrapper[4767]: Trace[2065946472]: [10.853160353s] [10.853160353s] END Nov 24 21:38:57 crc kubenswrapper[4767]: I1124 21:38:57.873981 4767 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 21:38:57 crc kubenswrapper[4767]: I1124 21:38:57.876677 4767 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 21:38:57 crc kubenswrapper[4767]: E1124 21:38:57.877229 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.197708 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.203466 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.246213 4767 apiserver.go:52] "Watching apiserver" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.248646 4767 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.248865 4767 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.249190 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.249352 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.249383 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.249649 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.249702 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.249632 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.250035 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.250188 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.250440 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.251169 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.253670 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.255508 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.255624 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.256377 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.256940 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.257617 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.258038 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.258632 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.259857 4767 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278408 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278486 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278521 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278543 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278566 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278587 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278611 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278634 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278659 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278683 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278708 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278731 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.278896 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279022 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279540 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279566 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279590 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279616 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279641 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279672 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279697 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279721 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279744 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279878 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.279922 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280023 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280037 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280058 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280082 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280108 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280131 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280153 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280179 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280201 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280229 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280253 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280297 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280326 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280352 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280377 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280404 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280427 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280450 4767 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280492 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280514 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280539 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280566 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280590 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280615 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280638 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280661 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280684 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280741 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280764 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280760 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280790 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280804 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280815 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280841 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280868 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280891 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280913 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280937 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.280987 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.281011 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.281017 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.281036 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.281114 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.281407 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.281466 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.281703 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.281767 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.281834 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.281927 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.282045 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.282184 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.282231 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.282477 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.282490 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.282494 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.282722 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283068 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283152 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283253 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283447 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283477 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283640 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283658 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283671 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283743 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283820 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283855 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283860 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283884 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283911 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283936 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283967 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.283994 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284020 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284043 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" 
(UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284096 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284101 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284142 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284170 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284202 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284231 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284259 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284313 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284319 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: 
"96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284342 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284373 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284401 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284426 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284453 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284479 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284506 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284531 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284556 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284582 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284607 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284633 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284665 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284694 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284721 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284745 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284774 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284818 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284849 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284935 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284970 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285001 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285028 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285054 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285079 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285106 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285131 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285160 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285190 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285218 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285244 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285288 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285314 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285336 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285362 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285386 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285412 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285433 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285454 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: 
I1124 21:38:58.285479 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285501 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285529 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285557 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285586 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285608 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285631 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285659 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285687 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285712 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 21:38:58 crc 
kubenswrapper[4767]: I1124 21:38:58.285737 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285762 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285786 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285810 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285834 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285862 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285889 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285916 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285942 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285970 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285997 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286022 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286044 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286067 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286093 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286118 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286144 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286168 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286192 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286221 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 
21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286246 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286286 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286321 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286347 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286371 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286399 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286427 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286479 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286512 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286539 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286568 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286592 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286621 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286646 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286670 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286698 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286723 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286748 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286775 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286802 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286835 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286863 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286888 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286917 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286946 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286969 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286997 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287022 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287048 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287075 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287101 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287129 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287159 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287186 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287213 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287242 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287286 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287318 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287344 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287367 4767 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287394 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287417 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287443 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287465 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287492 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287516 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288366 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288407 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288494 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288522 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288550 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288577 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288606 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288641 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288672 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288703 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288736 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288766 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288797 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288823 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288853 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288881 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288873 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289003 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289024 4767 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289039 4767 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289052 4767 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289068 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289082 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289097 4767 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289537 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.290099 4767 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.293577 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.294401 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295042 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295397 4767 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295444 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295471 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295750 4767 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295776 4767 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295797 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295819 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295841 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295870 4767 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295895 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295917 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295941 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295963 4767 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295982 4767 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296002 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296022 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296043 4767 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296064 4767 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296083 4767 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296104 4767 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296128 4767 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296151 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.301913 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284478 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284595 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284877 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.284962 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285504 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285550 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285853 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.285848 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286227 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286424 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286637 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286806 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.286945 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287154 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287154 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.287973 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288580 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288594 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288684 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288725 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288910 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.288906 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289113 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289200 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289335 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289548 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289556 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289624 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.289639 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.308902 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.309254 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.309789 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289668 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.289903 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.290080 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.290112 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.290153 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.290566 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.290603 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.290619 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.290793 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.290839 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.291105 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.291163 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.291226 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.291500 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.291879 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.292009 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.292104 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.292170 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.292460 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.292540 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.292765 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.293018 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.293171 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.293182 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.293220 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.293399 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.293406 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295666 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.295979 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296002 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296036 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296168 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296380 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296599 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296638 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.296646 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.297288 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.297316 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.297936 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.298963 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.300982 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.301149 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.301160 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.301258 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.301338 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.301741 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.302338 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.302489 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:38:58.802440508 +0000 UTC m=+21.719423940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.310328 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:58.810257841 +0000 UTC m=+21.727241213 (durationBeforeRetry 500ms). 
Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.310328 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:58.810257841 +0000 UTC m=+21.727241213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.310373 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:58.810366804 +0000 UTC m=+21.727350176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.302605 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.302821 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.303702 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.310460 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
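
The two MountVolume.SetUp failures above report `object ... not registered` because, in this post-restart window, the kubelet has not yet rebuilt its record of which ConfigMaps and Secrets the pod references; each failed volume operation is re-queued with exponential backoff, and `durationBeforeRetry 500ms` is the first step. A minimal sketch of that bookkeeping (only the 500ms base is taken from the log; the doubling and the cap are assumed, and the names are hypothetical):

```go
// Hypothetical sketch of the retry bookkeeping behind the
// "No retries permitted until ... (durationBeforeRetry 500ms)" lines:
// each failure doubles the wait before the next attempt, up to a cap.
package main

import (
	"fmt"
	"time"
)

const (
	initialBackoff = 500 * time.Millisecond // base visible in the log
	maxBackoff     = 2 * time.Minute        // assumed cap, illustrative
)

type backoff struct {
	duration   time.Duration // wait before the next retry
	lastFailed time.Time     // when the operation last failed
}

// fail records a failure and widens the retry window.
func (b *backoff) fail(now time.Time) {
	if b.duration == 0 {
		b.duration = initialBackoff
	} else if b.duration *= 2; b.duration > maxBackoff {
		b.duration = maxBackoff
	}
	b.lastFailed = now
}

// retryAllowed reports whether enough time has passed to try again.
func (b *backoff) retryAllowed(now time.Time) bool {
	return now.After(b.lastFailed.Add(b.duration))
}

func main() {
	var b backoff
	now := time.Now()
	b.fail(now) // first failure: no retries permitted for 500ms
	fmt.Println("retry allowed now?", b.retryAllowed(now)) // false
	fmt.Println("next retry after:", b.duration)           // 500ms
	b.fail(now.Add(time.Second)) // second failure: backoff doubles
	fmt.Println("next retry after:", b.duration)           // 1s
}
```
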
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.306742 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.307221 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.310587 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.310647 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.310734 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.310796 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.310858 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.311091 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.312520 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.312756 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.313058 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.313207 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
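
The status-update failure above is circular: the API server must consult the `pod.network-node-identity.openshift.io` mutating webhook at `https://127.0.0.1:9743/pod` before accepting the patch, but that endpoint is served by the very pod (`network-node-identity-vrzqb`) whose containers are still being recreated, so the dial is refused and the patch is rejected until the webhook comes back. A minimal probe that reproduces the logged symptom (sketch only; the URL is taken from the log, and InsecureSkipVerify stands in for the webhook CA, which is not at hand here):

```go
// Probe the webhook URL the API server calls for pod status patches;
// while nothing listens on 127.0.0.1:9743 this fails exactly as logged.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 10 * time.Second, // mirrors the ?timeout=10s in the URL
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s",
		"application/json", nil)
	if err != nil {
		fmt.Println("webhook unreachable:", err) // "... connect: connection refused"
		return
	}
	defer resp.Body.Close()
	fmt.Println("webhook answered:", resp.Status)
}
```
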
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.313891 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.314306 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.314672 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.314705 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.315018 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.315304 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.315311 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.315347 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.315373 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.315432 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.315570 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.315769 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.316536 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.317397 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.317448 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.318012 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.318036 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.318664 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.319365 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.319725 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.319934 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.319958 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.319973 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.320065 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:58.820044492 +0000 UTC m=+21.737028064 (durationBeforeRetry 500ms). 
Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.320065 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:58.820044492 +0000 UTC m=+21.737028064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.320248 4767 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.320293 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.320311 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.321390 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.321510 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.321714 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.321859 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.322957 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.323319 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.323953 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.324052 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.326353 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.327039 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.327364 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.329061 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.329453 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.329611 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.329944 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.329944 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.330314 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.330362 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.330525 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.330572 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.330717 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.330010 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.330051 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.330750 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.330236 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.330839 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:58.830748691 +0000 UTC m=+21.747732263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.330993 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.331004 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.331069 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.331088 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.331347 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.332088 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.332546 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.332931 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.334717 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.334793 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.334840 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.335090 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.334750 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.335699 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.335943 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.336222 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.336595 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.336610 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.336595 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.336741 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.336979 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.337019 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.337021 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.337246 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.337254 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.337566 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.337558 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.337731 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.337791 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.338792 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.341065 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.341131 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.341228 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.341346 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.341540 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.341801 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.341970 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.345081 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.345773 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.349262 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.351108 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.351944 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.354683 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.355935 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.356586 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.357444 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.359410 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.360330 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.361032 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.362294 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.363119 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.363353 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.364025 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.365016 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.365033 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: 
"5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.365938 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.367927 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.368491 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.369610 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.370129 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.370684 4767 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.371229 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.373354 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.373515 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.374041 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.375113 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.376685 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.377358 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.378318 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.378973 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.379990 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.380497 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.380814 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.381679 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.382507 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.383477 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.383979 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.384901 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.385448 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.386607 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.387066 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.387932 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: 
I1124 21:38:58.388583 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.389086 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.389665 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.390777 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.391455 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.396993 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397041 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397224 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397253 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397311 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397329 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397361 4767 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397381 4767 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397395 4767 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397407 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397421 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397434 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397447 4767 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397459 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397472 4767 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397485 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397501 4767 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397516 4767 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397531 4767 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397544 4767 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397557 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397569 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397581 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397595 4767 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397607 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397627 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397640 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397653 4767 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397664 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397680 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397695 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397707 4767 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397719 4767 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397734 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397749 4767 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397761 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397774 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397786 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397797 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397811 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397823 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397834 4767 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397846 4767 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397857 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397870 4767 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397883 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397895 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397907 4767 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397918 4767 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397931 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397944 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397955 4767 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397966 4767 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397979 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397993 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398007 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398024 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398044 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398056 4767 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398068 4767 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398080 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398093 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398107 4767 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398119 4767 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398132 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398145 4767 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398182 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398197 4767 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398210 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398222 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398237 4767 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398250 4767 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398262 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398295 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398309 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398321 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398333 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398345 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398357 4767 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398369 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398382 4767 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398394 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398440 4767 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398453 4767 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398465 4767 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397711 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398480 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398496 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398509 4767 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398523 4767 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398536 4767 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398548 4767 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.397854 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398561 4767 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398603 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398614 4767 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398625 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398636 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398646 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398657 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398667 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398677 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398688 4767 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398698 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398708 4767 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398718 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398727 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398738 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398747 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398756 4767 reconciler_common.go:293] "Volume 
detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398765 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398774 4767 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398783 4767 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398793 4767 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398802 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398810 4767 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398820 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398829 4767 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398838 4767 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398846 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398856 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398865 4767 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398874 4767 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398885 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398894 4767 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398903 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398916 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398926 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398935 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398944 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398953 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398962 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398971 4767 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398980 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398991 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.398999 4767 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399008 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399017 4767 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399026 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399034 4767 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399059 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399072 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399083 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399096 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399104 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399113 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399121 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399129 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399138 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399148 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399174 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399183 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399191 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399200 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399211 4767 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399221 4767 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399230 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399239 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399248 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399257 4767 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399284 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399293 4767 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399302 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399311 4767 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399320 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399330 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.399339 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.400392 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.409680 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.419690 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.429730 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.437880 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.448151 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.448695 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.450319 4767 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88" exitCode=255 Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.450433 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88"} Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.450519 4767 scope.go:117] "RemoveContainer" containerID="953ba71bc4cef1ebfaa7cbf64abdc48094ae19ecc8b09303d8aff226b1366c39" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.462643 4767 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.463017 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.464473 4767 scope.go:117] "RemoveContainer" containerID="3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.464493 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.464689 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.475233 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.488191 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.497707 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.519637 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.534506 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.544061 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.570306 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.580070 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:38:58 crc kubenswrapper[4767]: W1124 21:38:58.580680 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-ce96f4585b1f1f3a35cfb9061e648557034abc98fdd10a441ce1b2c2b64b697f WatchSource:0}: Error finding container ce96f4585b1f1f3a35cfb9061e648557034abc98fdd10a441ce1b2c2b64b697f: Status 404 returned error can't find the container with id ce96f4585b1f1f3a35cfb9061e648557034abc98fdd10a441ce1b2c2b64b697f Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.590683 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.599518 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:58 crc kubenswrapper[4767]: W1124 21:38:58.607546 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-3b8c46c73432d6b5d969d6ad3ab539d8fee6b48d453949ebc4cd82092331d425 WatchSource:0}: Error finding container 3b8c46c73432d6b5d969d6ad3ab539d8fee6b48d453949ebc4cd82092331d425: Status 404 returned error can't find the container with id 3b8c46c73432d6b5d969d6ad3ab539d8fee6b48d453949ebc4cd82092331d425 Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.614531 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.637773 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.658706 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.682314 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.701760 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://953ba71bc4cef1ebfaa7cbf64abdc48094ae19ecc8b09303d8aff226b1366c39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"message\\\":\\\"W1124 21:38:41.481284 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:41.481647 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020321 cert, and key in /tmp/serving-cert-1069606059/serving-signer.crt, /tmp/serving-cert-1069606059/serving-signer.key\\\\nI1124 21:38:41.677051 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:41.679683 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:38:41.679797 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:41.681030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069606059/tls.crt::/tmp/serving-cert-1069606059/tls.key\\\\\\\"\\\\nF1124 21:38:41.841944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.723596 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.735539 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.758664 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.802767 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.803008 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:38:59.802969108 +0000 UTC m=+22.719952510 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.903311 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.903645 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.903452 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.903668 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.903676 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.903688 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:38:58 crc kubenswrapper[4767]: I1124 21:38:58.903686 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.903731 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:59.903717597 +0000 UTC m=+22.820700959 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.903962 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.904021 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.904043 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.904082 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.904130 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:59.904104398 +0000 UTC m=+22.821087810 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.904161 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:59.90414697 +0000 UTC m=+22.821130382 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.904191 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:38:58 crc kubenswrapper[4767]: E1124 21:38:58.904228 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:59.904219262 +0000 UTC m=+22.821202634 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.312982 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.313102 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.454631 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3b8c46c73432d6b5d969d6ad3ab539d8fee6b48d453949ebc4cd82092331d425"} Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.456850 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49"} Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.456876 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33"} Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.456886 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cf4ac75b846e809513aef8e6b51ab8e32d0adbc4ff46d9fb214a459a93f3387d"} Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.458626 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64"} Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.458690 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ce96f4585b1f1f3a35cfb9061e648557034abc98fdd10a441ce1b2c2b64b697f"} Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.460693 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.463489 4767 scope.go:117] "RemoveContainer" containerID="3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88" Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.463594 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.470857 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.485545 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.500626 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.517802 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.535221 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.548439 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.561337 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.582809 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://953ba71bc4cef1ebfaa7cbf64abdc48094ae19ecc8b09303d8aff226b1366c39\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"message\\\":\\\"W1124 21:38:41.481284 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:41.481647 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020321 cert, and key in /tmp/serving-cert-1069606059/serving-signer.crt, /tmp/serving-cert-1069606059/serving-signer.key\\\\nI1124 21:38:41.677051 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:41.679683 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:38:41.679797 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:41.681030 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1069606059/tls.crt::/tmp/serving-cert-1069606059/tls.key\\\\\\\"\\\\nF1124 21:38:41.841944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.594841 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.609112 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.619668 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.630113 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.644485 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.659251 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.675882 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.686815 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.699967 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:38:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.811970 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:39:01.811954873 +0000 UTC m=+24.728938245 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.811897 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.913142 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.913202 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.913229 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:38:59 crc kubenswrapper[4767]: I1124 21:38:59.913257 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913375 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913409 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913411 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913421 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913433 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913448 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913496 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:01.913480885 +0000 UTC m=+24.830464257 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913514 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:01.913507026 +0000 UTC m=+24.830490398 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913517 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913552 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:01.913537037 +0000 UTC m=+24.830520499 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913608 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:38:59 crc kubenswrapper[4767]: E1124 21:38:59.913633 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:01.913625179 +0000 UTC m=+24.830608651 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:39:00 crc kubenswrapper[4767]: I1124 21:39:00.312309 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:00 crc kubenswrapper[4767]: E1124 21:39:00.312447 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:00 crc kubenswrapper[4767]: I1124 21:39:00.312562 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:00 crc kubenswrapper[4767]: E1124 21:39:00.312691 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:00 crc kubenswrapper[4767]: I1124 21:39:00.317934 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 21:39:00 crc kubenswrapper[4767]: I1124 21:39:00.319229 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 21:39:00 crc kubenswrapper[4767]: I1124 21:39:00.320813 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 21:39:00 crc kubenswrapper[4767]: I1124 21:39:00.321849 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 21:39:00 crc kubenswrapper[4767]: I1124 21:39:00.322689 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 21:39:00 crc kubenswrapper[4767]: I1124 21:39:00.323516 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 21:39:00 crc kubenswrapper[4767]: I1124 21:39:00.466300 4767 scope.go:117] "RemoveContainer" containerID="3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88" Nov 24 21:39:00 crc kubenswrapper[4767]: E1124 21:39:00.466526 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.313477 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.313621 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.469641 4767 scope.go:117] "RemoveContainer" containerID="3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.469679 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a"} Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.469869 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.484586 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.504001 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.523686 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.538401 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.552233 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:4
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.568098 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.582099 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.597091 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:01Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.829015 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.829179 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:39:05.829150339 +0000 UTC m=+28.746133741 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.930228 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.930296 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.930326 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:39:01 crc kubenswrapper[4767]: I1124 21:39:01.930348 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930361 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930479 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930442 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930570 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930583 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930459 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930618 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930626 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930670 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:05.930526037 +0000 UTC m=+28.847509409 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930687 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:05.930678802 +0000 UTC m=+28.847662174 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930719 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:05.930694162 +0000 UTC m=+28.847677544 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:39:01 crc kubenswrapper[4767]: E1124 21:39:01.930743 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:05.930736343 +0000 UTC m=+28.847719715 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:39:02 crc kubenswrapper[4767]: I1124 21:39:02.313848 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:39:02 crc kubenswrapper[4767]: E1124 21:39:02.314001 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:39:02 crc kubenswrapper[4767]: I1124 21:39:02.314736 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:39:02 crc kubenswrapper[4767]: E1124 21:39:02.314793 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:39:02 crc kubenswrapper[4767]: I1124 21:39:02.419944 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 21:39:02 crc kubenswrapper[4767]: I1124 21:39:02.472058 4767 scope.go:117] "RemoveContainer" containerID="3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88"
Nov 24 21:39:02 crc kubenswrapper[4767]: E1124 21:39:02.472183 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.313005 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:39:03 crc kubenswrapper[4767]: E1124 21:39:03.313146 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.599722 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-2p8zc"]
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.600055 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2p8zc"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.601169 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-74ffd"]
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.602083 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.602259 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.602401 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.602641 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gnz8t"]
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.602797 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mwpfp"]
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.602811 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-74ffd"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.602917 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.603732 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mwpfp"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.605973 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.606076 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.606213 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.606265 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.606368 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.606385 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.606441 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.606588 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.606680 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.606768 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.607896 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.608510 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.617568 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.628984 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.639321 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.652714 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.665982 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.677006 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.689078 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.702055 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.712203 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.722640 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.731428 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.740757 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.743024 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a49c8848-a5f0-4e10-b053-8048beeaad5b-cni-binary-copy\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.743173 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-multus-socket-dir-parent\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.743288 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-cnibin\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.743399 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f45850ec-6094-4a27-aa04-a35c002e6160-cni-binary-copy\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.743500 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp5s6\" (UniqueName: \"kubernetes.io/projected/4396a62d-6ac4-4999-9bbb-e14f20a5a9b3-kube-api-access-lp5s6\") pod \"node-resolver-2p8zc\" (UID: \"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\") " pod="openshift-dns/node-resolver-2p8zc"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.743593 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtxpb\" (UniqueName: \"kubernetes.io/projected/f45850ec-6094-4a27-aa04-a35c002e6160-kube-api-access-jtxpb\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.743678 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-os-release\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.743823 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-run-k8s-cni-cncf-io\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.743953 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-multus-conf-dir\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.743996 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-etc-kubernetes\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744018 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a49c8848-a5f0-4e10-b053-8048beeaad5b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744068 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-mcd-auth-proxy-config\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744094 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-system-cni-dir\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744114 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-multus-cni-dir\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744139 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-var-lib-kubelet\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744170 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744197 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56s8\" (UniqueName: \"kubernetes.io/projected/a49c8848-a5f0-4e10-b053-8048beeaad5b-kube-api-access-m56s8\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744218 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-run-netns\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744287 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-var-lib-cni-bin\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744330 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-run-multus-certs\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744419 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4396a62d-6ac4-4999-9bbb-e14f20a5a9b3-hosts-file\") pod \"node-resolver-2p8zc\" (UID: \"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\") " pod="openshift-dns/node-resolver-2p8zc"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744445 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72ppr\" (UniqueName: \"kubernetes.io/projected/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-kube-api-access-72ppr\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744466 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-os-release\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744492 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-cnibin\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744544 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-hostroot\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744570 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-system-cni-dir\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744599 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-var-lib-cni-multus\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744624 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f45850ec-6094-4a27-aa04-a35c002e6160-multus-daemon-config\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744820 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-rootfs\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.744898 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-proxy-tls\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.755210 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.766334 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.777871 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.787535 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.798383 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.816221 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.831161 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845415 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4396a62d-6ac4-4999-9bbb-e14f20a5a9b3-hosts-file\") pod \"node-resolver-2p8zc\" (UID: \"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\") " pod="openshift-dns/node-resolver-2p8zc" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845464 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-run-multus-certs\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845492 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72ppr\" (UniqueName: \"kubernetes.io/projected/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-kube-api-access-72ppr\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845515 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-os-release\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845535 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-cnibin\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845553 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4396a62d-6ac4-4999-9bbb-e14f20a5a9b3-hosts-file\") pod \"node-resolver-2p8zc\" (UID: \"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\") " pod="openshift-dns/node-resolver-2p8zc" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845596 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-hostroot\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845601 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-run-multus-certs\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845561 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-hostroot\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845639 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-system-cni-dir\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845659 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-var-lib-cni-multus\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " 
pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845691 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-rootfs\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845710 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-proxy-tls\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845731 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f45850ec-6094-4a27-aa04-a35c002e6160-multus-daemon-config\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845753 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a49c8848-a5f0-4e10-b053-8048beeaad5b-cni-binary-copy\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845777 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-multus-socket-dir-parent\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845798 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-cnibin\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845819 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f45850ec-6094-4a27-aa04-a35c002e6160-cni-binary-copy\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845841 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp5s6\" (UniqueName: \"kubernetes.io/projected/4396a62d-6ac4-4999-9bbb-e14f20a5a9b3-kube-api-access-lp5s6\") pod \"node-resolver-2p8zc\" (UID: \"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\") " pod="openshift-dns/node-resolver-2p8zc" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845847 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-os-release\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 
21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845862 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtxpb\" (UniqueName: \"kubernetes.io/projected/f45850ec-6094-4a27-aa04-a35c002e6160-kube-api-access-jtxpb\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845893 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-os-release\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845914 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-run-k8s-cni-cncf-io\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845943 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-multus-conf-dir\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845965 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-etc-kubernetes\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.845988 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a49c8848-a5f0-4e10-b053-8048beeaad5b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846013 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-mcd-auth-proxy-config\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846059 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846075 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-cnibin\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846082 4767 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-system-cni-dir\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846103 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-multus-cni-dir\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846110 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-cnibin\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846123 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-var-lib-kubelet\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846130 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-system-cni-dir\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846146 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m56s8\" (UniqueName: \"kubernetes.io/projected/a49c8848-a5f0-4e10-b053-8048beeaad5b-kube-api-access-m56s8\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846152 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-var-lib-cni-multus\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846166 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-run-netns\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846176 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-rootfs\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846186 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-var-lib-cni-bin\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846261 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-var-lib-cni-bin\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.846959 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f45850ec-6094-4a27-aa04-a35c002e6160-cni-binary-copy\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.847307 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-os-release\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.847340 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-run-k8s-cni-cncf-io\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.847372 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-multus-conf-dir\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.847368 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a49c8848-a5f0-4e10-b053-8048beeaad5b-cni-binary-copy\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.847419 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-etc-kubernetes\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.847686 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-multus-socket-dir-parent\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.847751 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-var-lib-kubelet\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 
21:39:03.847799 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f45850ec-6094-4a27-aa04-a35c002e6160-multus-daemon-config\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.847887 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a49c8848-a5f0-4e10-b053-8048beeaad5b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.847931 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-host-run-netns\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.847967 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-system-cni-dir\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.848251 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f45850ec-6094-4a27-aa04-a35c002e6160-multus-cni-dir\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.848545 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-mcd-auth-proxy-config\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.848881 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a49c8848-a5f0-4e10-b053-8048beeaad5b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.849831 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.857908 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-proxy-tls\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.868145 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m56s8\" (UniqueName: \"kubernetes.io/projected/a49c8848-a5f0-4e10-b053-8048beeaad5b-kube-api-access-m56s8\") pod \"multus-additional-cni-plugins-mwpfp\" (UID: \"a49c8848-a5f0-4e10-b053-8048beeaad5b\") " pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.869603 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtxpb\" (UniqueName: \"kubernetes.io/projected/f45850ec-6094-4a27-aa04-a35c002e6160-kube-api-access-jtxpb\") pod \"multus-gnz8t\" (UID: \"f45850ec-6094-4a27-aa04-a35c002e6160\") " pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.870185 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.880756 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72ppr\" (UniqueName: \"kubernetes.io/projected/7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0-kube-api-access-72ppr\") pod \"machine-config-daemon-74ffd\" (UID: \"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\") " pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.881675 
4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp5s6\" (UniqueName: \"kubernetes.io/projected/4396a62d-6ac4-4999-9bbb-e14f20a5a9b3-kube-api-access-lp5s6\") pod \"node-resolver-2p8zc\" (UID: \"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\") " pod="openshift-dns/node-resolver-2p8zc" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.913583 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2p8zc" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.918712 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.924233 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gnz8t" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.928657 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" Nov 24 21:39:03 crc kubenswrapper[4767]: W1124 21:39:03.941486 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda49c8848_a5f0_4e10_b053_8048beeaad5b.slice/crio-0f20a2b9ccb469733f85a1327a384f92d283d2aab9470d17ce5e2f4b5afbdd77 WatchSource:0}: Error finding container 0f20a2b9ccb469733f85a1327a384f92d283d2aab9470d17ce5e2f4b5afbdd77: Status 404 returned error can't find the container with id 0f20a2b9ccb469733f85a1327a384f92d283d2aab9470d17ce5e2f4b5afbdd77 Nov 24 21:39:03 crc kubenswrapper[4767]: W1124 21:39:03.948774 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf45850ec_6094_4a27_aa04_a35c002e6160.slice/crio-ad0af68b18b9da9e0ad13212ed2b771389a13813b4611a24e69aaaec6f5b82df WatchSource:0}: Error finding container ad0af68b18b9da9e0ad13212ed2b771389a13813b4611a24e69aaaec6f5b82df: Status 404 returned error can't find the container with id ad0af68b18b9da9e0ad13212ed2b771389a13813b4611a24e69aaaec6f5b82df Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.988607 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ll767"] Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.989558 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.995102 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.995158 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.995364 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.995401 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.995613 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.995805 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 21:39:03 crc kubenswrapper[4767]: I1124 21:39:03.995978 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.010589 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.025584 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.040296 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048596 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-etc-openvswitch\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048638 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-log-socket\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048661 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-config\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048683 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-kubelet\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048704 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-var-lib-openvswitch\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048724 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41f27727-62e4-4386-a459-b26e471e1c0a-ovn-node-metrics-cert\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048753 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-env-overrides\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048769 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-script-lib\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048795 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-systemd\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048817 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-ovn\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048914 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-openvswitch\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048954 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-netd\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.048975 4767 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff52b\" (UniqueName: \"kubernetes.io/projected/41f27727-62e4-4386-a459-b26e471e1c0a-kube-api-access-ff52b\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.049001 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-slash\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.049025 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-ovn-kubernetes\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.049090 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-systemd-units\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.049117 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-netns\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.049137 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-node-log\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.049156 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-bin\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.049179 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.064526 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.087554 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.112278 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is 
after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.136857 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.149609 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-config\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.149662 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-kubelet\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.149679 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-var-lib-openvswitch\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc 
kubenswrapper[4767]: I1124 21:39:04.150178 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41f27727-62e4-4386-a459-b26e471e1c0a-ovn-node-metrics-cert\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.150235 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-env-overrides\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.152808 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-var-lib-openvswitch\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.153558 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-config\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.153633 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-script-lib\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.153740 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-systemd\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.153768 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-ovn\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.153845 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-openvswitch\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.153881 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-netd\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.153915 4767 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-ff52b\" (UniqueName: \"kubernetes.io/projected/41f27727-62e4-4386-a459-b26e471e1c0a-kube-api-access-ff52b\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.153947 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-slash\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.153979 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-ovn-kubernetes\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.154158 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-systemd-units\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.154192 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-netns\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.154241 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-node-log\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.154496 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-bin\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.154565 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.154616 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-etc-openvswitch\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.154642 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-env-overrides\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.154646 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-log-socket\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.154695 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-kubelet\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.154767 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-log-socket\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.155461 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-script-lib\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.155547 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-systemd\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.155590 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-ovn\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.155628 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-openvswitch\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.155665 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-netd\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.156240 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-netns\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.156362 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.156396 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-etc-openvswitch\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.156397 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-bin\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.156419 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-ovn-kubernetes\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.156450 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-node-log\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.156468 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-slash\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.156487 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-systemd-units\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.161532 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"system
d-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.162014 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41f27727-62e4-4386-a459-b26e471e1c0a-ovn-node-metrics-cert\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.174921 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff52b\" (UniqueName: \"kubernetes.io/projected/41f27727-62e4-4386-a459-b26e471e1c0a-kube-api-access-ff52b\") pod \"ovnkube-node-ll767\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.177480 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.187623 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.202149 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.216001 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.277702 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.279013 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.279046 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.279057 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.279186 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.285581 4767 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.285853 4767 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.286675 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.286703 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.286715 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.286730 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.286741 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.303867 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: E1124 21:39:04.303918 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.306778 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.306811 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.306822 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.306837 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.306850 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.313217 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.313301 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:04 crc kubenswrapper[4767]: E1124 21:39:04.313376 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:04 crc kubenswrapper[4767]: E1124 21:39:04.313459 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:04 crc kubenswrapper[4767]: E1124 21:39:04.320118 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.323165 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.323213 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.323224 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.323244 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.323257 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:04 crc kubenswrapper[4767]: E1124 21:39:04.334668 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.337677 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.337717 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.337729 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.337747 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.337758 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:04 crc kubenswrapper[4767]: E1124 21:39:04.349550 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.352545 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.352579 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.352591 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.352607 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.352617 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:04 crc kubenswrapper[4767]: E1124 21:39:04.364785 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: E1124 21:39:04.364912 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.366106 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.366141 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.366155 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.366171 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.366186 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.401982 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:04 crc kubenswrapper[4767]: W1124 21:39:04.411100 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41f27727_62e4_4386_a459_b26e471e1c0a.slice/crio-67595d8270f2306b0a29b7b4225fafcd2d0c3a6741c5e8637559f5c5e43eed8e WatchSource:0}: Error finding container 67595d8270f2306b0a29b7b4225fafcd2d0c3a6741c5e8637559f5c5e43eed8e: Status 404 returned error can't find the container with id 67595d8270f2306b0a29b7b4225fafcd2d0c3a6741c5e8637559f5c5e43eed8e Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.469053 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.469097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.469131 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.469149 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.469161 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.477036 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"67595d8270f2306b0a29b7b4225fafcd2d0c3a6741c5e8637559f5c5e43eed8e"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.478344 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" event={"ID":"a49c8848-a5f0-4e10-b053-8048beeaad5b","Type":"ContainerStarted","Data":"cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.478384 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" event={"ID":"a49c8848-a5f0-4e10-b053-8048beeaad5b","Type":"ContainerStarted","Data":"0f20a2b9ccb469733f85a1327a384f92d283d2aab9470d17ce5e2f4b5afbdd77"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.480392 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.480414 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.480424 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"ce349ff9fd7dc5766deaa20fab3375f35951025268f256548b85409a9b3c1f20"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.481770 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2p8zc" event={"ID":"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3","Type":"ContainerStarted","Data":"1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.481797 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2p8zc" event={"ID":"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3","Type":"ContainerStarted","Data":"9226b2bc9b826b26c6e73e8648dae3a29351efbce6e448639348cce393bd7e99"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.483032 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnz8t" event={"ID":"f45850ec-6094-4a27-aa04-a35c002e6160","Type":"ContainerStarted","Data":"8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.483056 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnz8t" event={"ID":"f45850ec-6094-4a27-aa04-a35c002e6160","Type":"ContainerStarted","Data":"ad0af68b18b9da9e0ad13212ed2b771389a13813b4611a24e69aaaec6f5b82df"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.495569 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.514128 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.526573 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.536910 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc 
kubenswrapper[4767]: I1124 21:39:04.547717 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"
2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.566296 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.571216 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.571254 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 
21:39:04.571281 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.571297 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.571307 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.579736 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.591316 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.603994 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.617791 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.628780 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.641018 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.654909 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.666145 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.673201 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.673234 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.673243 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.673257 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.673298 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.676951 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.687601 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.697247 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.706975 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.720033 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.730397 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.741233 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.748387 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.762856 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.773388 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.774838 4767 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.774875 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.774886 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.774903 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.774915 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.785563 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.804730 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.877645 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.877680 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 
21:39:04.877692 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.877707 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.877717 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.979996 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.980043 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.980054 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.980074 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:04 crc kubenswrapper[4767]: I1124 21:39:04.980086 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:04Z","lastTransitionTime":"2025-11-24T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.082514 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.082573 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.082595 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.082618 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.082633 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:05Z","lastTransitionTime":"2025-11-24T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.184836 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.184875 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.184887 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.184907 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.184918 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:05Z","lastTransitionTime":"2025-11-24T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.288758 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.289230 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.289241 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.289256 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.289279 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:05Z","lastTransitionTime":"2025-11-24T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.313282 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:05 crc kubenswrapper[4767]: E1124 21:39:05.313422 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.391546 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.391595 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.391610 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.391637 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.391651 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:05Z","lastTransitionTime":"2025-11-24T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.488171 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b" exitCode=0 Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.488240 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.490928 4767 generic.go:334] "Generic (PLEG): container finished" podID="a49c8848-a5f0-4e10-b053-8048beeaad5b" containerID="cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f" exitCode=0 Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.490982 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" event={"ID":"a49c8848-a5f0-4e10-b053-8048beeaad5b","Type":"ContainerDied","Data":"cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.497140 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.497175 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.497185 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.497203 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.497217 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:05Z","lastTransitionTime":"2025-11-24T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.518480 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources
\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.536250 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.549429 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.566846 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.580936 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.593008 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.602037 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.602084 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.602096 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.602114 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.602129 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:05Z","lastTransitionTime":"2025-11-24T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.608780 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.618935 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.630940 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.647616 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.659694 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.671557 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.684462 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.699330 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.704073 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.704117 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.704129 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.704149 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.704161 4767 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:05Z","lastTransitionTime":"2025-11-24T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.714583 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.714583 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.729404 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z"
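Every status-manager failure in this stretch of the journal bottoms out in the same root cause, printed at the end of each entry: the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-24T21:39:05Z. The per-pod patch bodies are noise; the certificate dates are the signal. A minimal way to confirm this from the node is to read the serving certificate directly. The sketch below is a hypothetical helper, roughly equivalent to pointing openssl s_client at the same address; it disables verification on purpose, because verification is exactly what is failing and we only want to inspect the validity window.

package main

// checkcert.go - dump the validity window of the certificate served on
// 127.0.0.1:9743, the webhook endpoint named in the log entries above.
import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify is deliberate: the handshake must complete even
	// though the certificate is expired, so that we can read it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := certs[0]
	fmt.Println("subject:   ", leaf.Subject)
	fmt.Println("not before:", leaf.NotBefore.Format(time.RFC3339))
	fmt.Println("not after: ", leaf.NotAfter.Format(time.RFC3339))
	if time.Now().After(leaf.NotAfter) {
		fmt.Println("certificate is expired - matches the x509 error in the log")
	}
}

The pattern here is the classic one for CRC: a guest resumed months after its certificates were minted. That typically recovers on its own once the cluster's certificate rotation runs; the point of the sketch is only to tie the recurring x509 message to a concrete notAfter date.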
Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.748483 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z"
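The "failed to patch status" entries embed the entire strategic-merge patch, quoted twice over: the error message quotes the patch bytes, and klog then quotes the whole err value, which is what produces the runs of \\\" above. To read one, it is easier to unquote twice and pretty-print. The decoder below is a throwaway sketch of my own, not cluster tooling; it assumes Go 1.17+ (for strconv.QuotedPrefix) and that a journal line is piped to it intact on stdin.

package main

// decodepatch.go - print the status patch embedded in a kubenswrapper
// "Failed to update status for pod" journal line as indented JSON.
import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // these journal lines run to several KB
	for sc.Scan() {
		line := sc.Text()
		i := strings.Index(line, ` err="`)
		if i < 0 {
			continue
		}
		// Level 1: the whole err value is a Go-quoted string in the log line.
		quoted, err := strconv.QuotedPrefix(line[i+len(` err=`):])
		if err != nil {
			continue
		}
		msg, err := strconv.Unquote(quoted)
		if err != nil {
			continue
		}
		// Level 2: inside the message, the patch itself is quoted again.
		j := strings.Index(msg, `"`)
		if j < 0 {
			continue
		}
		pq, err := strconv.QuotedPrefix(msg[j:])
		if err != nil {
			continue
		}
		patch, err := strconv.Unquote(pq)
		if err != nil {
			continue
		}
		var pretty bytes.Buffer
		if json.Indent(&pretty, []byte(patch), "", "  ") != nil {
			continue
		}
		fmt.Println(pretty.String())
	}
}

Decoded, the payloads are routine: "$setElementOrder/conditions" is strategic-merge-patch bookkeeping that pins the order of the conditions array, and the "back-off 10s restarting failed container" waiting message earlier in this section reflects the kubelet's crash-loop backoff, which starts at 10s and doubles per restart up to a cap (five minutes by default).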
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.765807 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.787704 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.801987 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.805696 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.805724 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.805735 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.805751 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.805761 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:05Z","lastTransitionTime":"2025-11-24T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.820148 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.833151 4767 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.845013 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.862470 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.866985 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wzmh2"] Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.867373 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wzmh2" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.869113 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.869315 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.869497 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.869623 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.879827 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.898682 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc 
kubenswrapper[4767]: I1124 21:39:05.907967 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.907994 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.908003 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.908017 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.908027 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:05Z","lastTransitionTime":"2025-11-24T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.911525 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:39:05 crc kubenswrapper[4767]: E1124 21:39:05.911658 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:39:13.911643795 +0000 UTC m=+36.828627167 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.920805 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.961545 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.980181 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:05 crc kubenswrapper[4767]: I1124 21:39:05.998701 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:05Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.010105 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.010338 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.010347 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.010360 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.010369 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:06Z","lastTransitionTime":"2025-11-24T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.018536 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.018725 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.018768 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a611583d-9542-4d80-9e88-391ee935b033-serviceca\") pod \"node-ca-wzmh2\" (UID: \"a611583d-9542-4d80-9e88-391ee935b033\") " pod="openshift-image-registry/node-ca-wzmh2" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.018797 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.018849 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.018875 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a611583d-9542-4d80-9e88-391ee935b033-host\") pod \"node-ca-wzmh2\" (UID: \"a611583d-9542-4d80-9e88-391ee935b033\") " pod="openshift-image-registry/node-ca-wzmh2" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.018901 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.018929 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl8vn\" (UniqueName: \"kubernetes.io/projected/a611583d-9542-4d80-9e88-391ee935b033-kube-api-access-jl8vn\") pod \"node-ca-wzmh2\" (UID: \"a611583d-9542-4d80-9e88-391ee935b033\") " pod="openshift-image-registry/node-ca-wzmh2" Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.019028 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:39:06 crc kubenswrapper[4767]: 
E1124 21:39:06.019078 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:14.019063052 +0000 UTC m=+36.936046514 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.019093 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.019098 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.019118 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.019129 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.019148 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:14.019131204 +0000 UTC m=+36.936114586 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.019164 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:14.019156215 +0000 UTC m=+36.936139597 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.019195 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.019206 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.019215 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.019240 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:14.019231857 +0000 UTC m=+36.936215229 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.034896 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.050830 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.061950 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.073024 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.112568 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.112853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.112879 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.112890 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.112905 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.112917 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:06Z","lastTransitionTime":"2025-11-24T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.119901 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a611583d-9542-4d80-9e88-391ee935b033-host\") pod \"node-ca-wzmh2\" (UID: \"a611583d-9542-4d80-9e88-391ee935b033\") " pod="openshift-image-registry/node-ca-wzmh2" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.119961 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl8vn\" (UniqueName: \"kubernetes.io/projected/a611583d-9542-4d80-9e88-391ee935b033-kube-api-access-jl8vn\") pod \"node-ca-wzmh2\" (UID: \"a611583d-9542-4d80-9e88-391ee935b033\") " pod="openshift-image-registry/node-ca-wzmh2" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.119996 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a611583d-9542-4d80-9e88-391ee935b033-serviceca\") pod \"node-ca-wzmh2\" (UID: \"a611583d-9542-4d80-9e88-391ee935b033\") " pod="openshift-image-registry/node-ca-wzmh2" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.120060 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a611583d-9542-4d80-9e88-391ee935b033-host\") pod \"node-ca-wzmh2\" (UID: \"a611583d-9542-4d80-9e88-391ee935b033\") " pod="openshift-image-registry/node-ca-wzmh2" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.121018 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a611583d-9542-4d80-9e88-391ee935b033-serviceca\") pod \"node-ca-wzmh2\" (UID: \"a611583d-9542-4d80-9e88-391ee935b033\") " pod="openshift-image-registry/node-ca-wzmh2" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.158196 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl8vn\" (UniqueName: \"kubernetes.io/projected/a611583d-9542-4d80-9e88-391ee935b033-kube-api-access-jl8vn\") pod \"node-ca-wzmh2\" (UID: \"a611583d-9542-4d80-9e88-391ee935b033\") " pod="openshift-image-registry/node-ca-wzmh2" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.172673 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.205486 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wzmh2" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.209906 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.215138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.215173 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.215184 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.215198 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.215209 4767 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:06Z","lastTransitionTime":"2025-11-24T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.251705 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: W1124 21:39:06.256000 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda611583d_9542_4d80_9e88_391ee935b033.slice/crio-bdd2762f095968d9e0ff206a51f9633182672d7e6182faead231949eb158ea83 WatchSource:0}: Error finding container bdd2762f095968d9e0ff206a51f9633182672d7e6182faead231949eb158ea83: Status 404 returned error can't find the container with id bdd2762f095968d9e0ff206a51f9633182672d7e6182faead231949eb158ea83 Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.301174 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.312414 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.312447 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.312563 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:06 crc kubenswrapper[4767]: E1124 21:39:06.312649 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.317643 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.317678 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.317687 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.317702 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.317711 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:06Z","lastTransitionTime":"2025-11-24T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.419990 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.420044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.420052 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.420066 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.420093 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:06Z","lastTransitionTime":"2025-11-24T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.499802 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.499862 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.499873 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.499881 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.499889 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.499899 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.501708 4767 generic.go:334] "Generic (PLEG): container finished" podID="a49c8848-a5f0-4e10-b053-8048beeaad5b" containerID="4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7" exitCode=0 Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.501799 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-mwpfp" event={"ID":"a49c8848-a5f0-4e10-b053-8048beeaad5b","Type":"ContainerDied","Data":"4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.507114 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wzmh2" event={"ID":"a611583d-9542-4d80-9e88-391ee935b033","Type":"ContainerStarted","Data":"bdd2762f095968d9e0ff206a51f9633182672d7e6182faead231949eb158ea83"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.515223 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.524905 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.524951 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.524969 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.525110 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:06 crc 
kubenswrapper[4767]: I1124 21:39:06.525130 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:06Z","lastTransitionTime":"2025-11-24T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.531437 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.541797 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.556521 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.571191 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.584396 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.602389 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.616214 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.627519 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.627558 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.627570 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.627586 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.627599 4767 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:06Z","lastTransitionTime":"2025-11-24T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.650165 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.691016 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.729558 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.730067 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.730101 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.730110 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.730123 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.730132 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:06Z","lastTransitionTime":"2025-11-24T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.773061 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.813955 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.832284 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.832316 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.832324 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.832338 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.832347 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:06Z","lastTransitionTime":"2025-11-24T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.850743 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.934566 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.934621 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.934637 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.934675 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:06 crc kubenswrapper[4767]: I1124 21:39:06.934719 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:06Z","lastTransitionTime":"2025-11-24T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.037617 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.037663 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.037673 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.037690 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.037701 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:07Z","lastTransitionTime":"2025-11-24T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.140038 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.140077 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.140114 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.140131 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.140143 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:07Z","lastTransitionTime":"2025-11-24T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.242609 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.242654 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.242666 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.242681 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.242933 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:07Z","lastTransitionTime":"2025-11-24T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.313115 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:39:07 crc kubenswrapper[4767]: E1124 21:39:07.313232 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.345231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.345284 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.345295 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.345309 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.345317 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:07Z","lastTransitionTime":"2025-11-24T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.448195 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.448231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.448239 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.448255 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.448279 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:07Z","lastTransitionTime":"2025-11-24T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.513536 4767 generic.go:334] "Generic (PLEG): container finished" podID="a49c8848-a5f0-4e10-b053-8048beeaad5b" containerID="36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92" exitCode=0 Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.513623 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" event={"ID":"a49c8848-a5f0-4e10-b053-8048beeaad5b","Type":"ContainerDied","Data":"36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92"} Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.514977 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wzmh2" event={"ID":"a611583d-9542-4d80-9e88-391ee935b033","Type":"ContainerStarted","Data":"b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58"} Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.531127 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.545562 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.550203 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.550230 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.550239 4767 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.550252 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.550262 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:07Z","lastTransitionTime":"2025-11-24T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.560206 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.577542 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.595044 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.606644 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.616259 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.629211 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.645325 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.652289 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.652321 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.652330 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.652345 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.652354 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:07Z","lastTransitionTime":"2025-11-24T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.657061 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.672000 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.682352 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.693464 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.707602 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.716029 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.729000 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.740765 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.749887 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.754942 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.754967 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.754975 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.754988 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.754997 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:07Z","lastTransitionTime":"2025-11-24T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.761021 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.771057 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.784634 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.805369 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.816330 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.834452 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.853621 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.857352 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.857399 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.857408 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.857424 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.857447 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:07Z","lastTransitionTime":"2025-11-24T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.891289 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.931973 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.960195 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.960292 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.960312 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.960338 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.960357 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:07Z","lastTransitionTime":"2025-11-24T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:07 crc kubenswrapper[4767]: I1124 21:39:07.973398 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.063095 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.063130 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.063138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.063154 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.063167 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:08Z","lastTransitionTime":"2025-11-24T21:39:08Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.165916 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.165999 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.166024 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.166050 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.166069 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:08Z","lastTransitionTime":"2025-11-24T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.268321 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.268359 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.268375 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.268394 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.268408 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:08Z","lastTransitionTime":"2025-11-24T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.312640 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:08 crc kubenswrapper[4767]: E1124 21:39:08.312758 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.312641 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:08 crc kubenswrapper[4767]: E1124 21:39:08.312847 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.326112 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.340369 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.358632 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",
\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.375245 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.375337 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.375356 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.375383 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.375401 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:08Z","lastTransitionTime":"2025-11-24T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.379746 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.398824 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.437141 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.457086 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.477474 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.477505 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.477513 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.477538 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.477547 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:08Z","lastTransitionTime":"2025-11-24T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.482737 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.511490 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.518663 4767 generic.go:334] "Generic (PLEG): container finished" podID="a49c8848-a5f0-4e10-b053-8048beeaad5b" containerID="3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4" exitCode=0 Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.518711 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" event={"ID":"a49c8848-a5f0-4e10-b053-8048beeaad5b","Type":"ContainerDied","Data":"3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4"} Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.523717 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844"} Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.527877 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.541320 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.552551 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.565291 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb092
37d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.577764 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.587214 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24
T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.611071 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.611110 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.611124 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.611139 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.611151 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:08Z","lastTransitionTime":"2025-11-24T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.613912 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.651154 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.690059 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.713520 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.713665 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.713681 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.713696 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.713708 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:08Z","lastTransitionTime":"2025-11-24T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.740607 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.774319 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.815329 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.816952 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.816977 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.816985 4767 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.816999 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.817008 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:08Z","lastTransitionTime":"2025-11-24T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.853044 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.890947 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.919647 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.919683 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.919696 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.919714 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.919725 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:08Z","lastTransitionTime":"2025-11-24T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.932653 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:08 crc kubenswrapper[4767]: I1124 21:39:08.977410 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.011811 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.024044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.024092 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.024108 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.024129 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.024145 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:09Z","lastTransitionTime":"2025-11-24T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.050407 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.092629 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.126965 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.127012 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.127026 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.127047 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.127063 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:09Z","lastTransitionTime":"2025-11-24T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.229697 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.229739 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.229751 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.229769 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.229781 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:09Z","lastTransitionTime":"2025-11-24T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.313243 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:09 crc kubenswrapper[4767]: E1124 21:39:09.313576 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.331915 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.331956 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.331966 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.331983 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.331995 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:09Z","lastTransitionTime":"2025-11-24T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.434678 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.434742 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.434754 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.434776 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.434791 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:09Z","lastTransitionTime":"2025-11-24T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.531723 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" event={"ID":"a49c8848-a5f0-4e10-b053-8048beeaad5b","Type":"ContainerDied","Data":"96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b"} Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.531728 4767 generic.go:334] "Generic (PLEG): container finished" podID="a49c8848-a5f0-4e10-b053-8048beeaad5b" containerID="96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b" exitCode=0 Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.537514 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.537566 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.537583 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.537606 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.537626 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:09Z","lastTransitionTime":"2025-11-24T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.550810 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.575751 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.588933 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.607904 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.620973 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.636702 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e
74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.640427 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.640457 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.640468 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.640482 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.640494 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:09Z","lastTransitionTime":"2025-11-24T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.652292 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.664844 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.674848 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.684219 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.696340 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\
\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.712828 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.723297 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.735251 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.742957 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.743001 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.743010 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.743026 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.743034 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:09Z","lastTransitionTime":"2025-11-24T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.844799 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.844831 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.844840 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.844853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.844863 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:09Z","lastTransitionTime":"2025-11-24T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.948729 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.948820 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.948832 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.948854 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:09 crc kubenswrapper[4767]: I1124 21:39:09.948868 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:09Z","lastTransitionTime":"2025-11-24T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.051504 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.051569 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.051588 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.051614 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.051633 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:10Z","lastTransitionTime":"2025-11-24T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.154896 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.155034 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.155073 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.155102 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.155123 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:10Z","lastTransitionTime":"2025-11-24T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.257583 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.257622 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.257633 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.257650 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.257662 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:10Z","lastTransitionTime":"2025-11-24T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.312945 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.312971 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:10 crc kubenswrapper[4767]: E1124 21:39:10.313128 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:10 crc kubenswrapper[4767]: E1124 21:39:10.313204 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.360473 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.360500 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.360508 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.360521 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.360530 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:10Z","lastTransitionTime":"2025-11-24T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.462809 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.462871 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.462894 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.462923 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.462946 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:10Z","lastTransitionTime":"2025-11-24T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.541563 4767 generic.go:334] "Generic (PLEG): container finished" podID="a49c8848-a5f0-4e10-b053-8048beeaad5b" containerID="673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc" exitCode=0 Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.541806 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" event={"ID":"a49c8848-a5f0-4e10-b053-8048beeaad5b","Type":"ContainerDied","Data":"673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc"} Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.560557 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.565742 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.565958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.566088 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.566306 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.566511 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:10Z","lastTransitionTime":"2025-11-24T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.575693 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.594016 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.608778 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.626491 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.645562 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.661102 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.669928 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.669974 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.669990 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.670010 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.670036 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:10Z","lastTransitionTime":"2025-11-24T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.677304 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.688985 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.706509 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.721137 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.735127 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.756472 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.766815 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.771844 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.771879 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.771896 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.771914 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.771928 4767 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:10Z","lastTransitionTime":"2025-11-24T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.873456 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.873487 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.873499 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.873515 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.873526 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:10Z","lastTransitionTime":"2025-11-24T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.975514 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.975564 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.975580 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.975599 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:10 crc kubenswrapper[4767]: I1124 21:39:10.975613 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:10Z","lastTransitionTime":"2025-11-24T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.080150 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.080209 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.080231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.080256 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.080303 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:11Z","lastTransitionTime":"2025-11-24T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.208960 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.209006 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.209022 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.209044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.209060 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:11Z","lastTransitionTime":"2025-11-24T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.312517 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:11 crc kubenswrapper[4767]: E1124 21:39:11.312699 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.314764 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.314901 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.314960 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.315045 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.315068 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:11Z","lastTransitionTime":"2025-11-24T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.417940 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.418002 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.418019 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.418043 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.418062 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:11Z","lastTransitionTime":"2025-11-24T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.521099 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.521467 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.521624 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.521757 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.521877 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:11Z","lastTransitionTime":"2025-11-24T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.549552 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" event={"ID":"a49c8848-a5f0-4e10-b053-8048beeaad5b","Type":"ContainerStarted","Data":"65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.554991 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.555599 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.555641 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.576852 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.596811 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.597841 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.599546 4767 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.615847 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.624293 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.624320 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.624331 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.624345 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.624356 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:11Z","lastTransitionTime":"2025-11-24T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.634459 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.646808 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.661805 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.675248 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.689719 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.702459 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.717974 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.726814 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.726844 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.726860 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.726879 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.726894 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:11Z","lastTransitionTime":"2025-11-24T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.733122 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.749416 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.760852 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.774363 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.790310 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.803215 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.813704 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.826488 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.829237 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.829288 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.829299 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.829316 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.829329 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:11Z","lastTransitionTime":"2025-11-24T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.838061 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.849081 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.865166 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f
562f165d27a34d3474d18c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.876514 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.890289 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.907619 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.920657 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.932352 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.932743 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.932831 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.932847 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.932873 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.932888 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:11Z","lastTransitionTime":"2025-11-24T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.946079 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:11 crc kubenswrapper[4767]: I1124 21:39:11.964328 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.035784 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.035832 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.035840 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.035856 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.035867 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:12Z","lastTransitionTime":"2025-11-24T21:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.139170 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.139238 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.139261 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.139326 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.139362 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:12Z","lastTransitionTime":"2025-11-24T21:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.243155 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.243204 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.243218 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.243236 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.243249 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:12Z","lastTransitionTime":"2025-11-24T21:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.313081 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.313128 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:12 crc kubenswrapper[4767]: E1124 21:39:12.313321 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:12 crc kubenswrapper[4767]: E1124 21:39:12.313468 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
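
Every NodeNotReady record above bottoms out in the same readiness probe: the kubelet sees no CNI configuration under /etc/kubernetes/cni/net.d/ and reports NetworkReady=false until the network provider writes one. A minimal, self-contained sketch of an equivalent directory probe (the extension list is an assumption, not the kubelet's exact logic):

```go
// cni_probe.go - illustrative sketch only; not the kubelet's real readiness code.
// Reports NetworkReady the way the records above describe it: ready as soon as
// any CNI configuration file exists in the directory the kubelet watches.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func networkReady(confDir string) (bool, string) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, fmt.Sprintf("cannot read %s: %v", confDir, err)
	}
	for _, e := range entries {
		// .conf/.conflist/.json are the extensions libcni loads; assumed here.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, "found " + e.Name()
		}
	}
	return false, "no CNI configuration file in " + confDir
}

func main() {
	ok, detail := networkReady("/etc/kubernetes/cni/net.d") // path from the log
	fmt.Printf("NetworkReady=%v (%s)\n", ok, detail)
}
```

On a healthy node the network provider typically drops a file such as 00-multus.conf into that directory, at which point a probe like this flips to ready.
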
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.345731 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.345792 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.345810 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.345832 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.345848 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:12Z","lastTransitionTime":"2025-11-24T21:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.449333 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.449402 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.449430 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.449462 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.449491 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:12Z","lastTransitionTime":"2025-11-24T21:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.553004 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.553069 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.553097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.553138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.553162 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:12Z","lastTransitionTime":"2025-11-24T21:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.559003 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.656183 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.656227 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.656238 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.656255 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.656288 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:12Z","lastTransitionTime":"2025-11-24T21:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.759900 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.759950 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.759966 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.759991 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.760010 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:12Z","lastTransitionTime":"2025-11-24T21:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.862871 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.862915 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.862927 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.862944 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.862958 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:12Z","lastTransitionTime":"2025-11-24T21:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.965231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.965289 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.965303 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.965318 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:12 crc kubenswrapper[4767]: I1124 21:39:12.965329 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:12Z","lastTransitionTime":"2025-11-24T21:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.067912 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.067966 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.067988 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.068014 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.068047 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:13Z","lastTransitionTime":"2025-11-24T21:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.170700 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.170723 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.170730 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.170743 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.170753 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:13Z","lastTransitionTime":"2025-11-24T21:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.272933 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.272967 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.272976 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.272992 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.273001 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:13Z","lastTransitionTime":"2025-11-24T21:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.312403 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:13 crc kubenswrapper[4767]: E1124 21:39:13.312546 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.375049 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.375085 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.375093 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.375107 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.375116 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:13Z","lastTransitionTime":"2025-11-24T21:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.477169 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.477205 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.477214 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.477226 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.477235 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:13Z","lastTransitionTime":"2025-11-24T21:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.561915 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.580329 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.580404 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.580429 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.580460 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.580536 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:13Z","lastTransitionTime":"2025-11-24T21:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.684127 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.684165 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.684175 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.684193 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.684202 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:13Z","lastTransitionTime":"2025-11-24T21:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.786562 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.786610 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.786622 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.786639 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.786651 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:13Z","lastTransitionTime":"2025-11-24T21:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.889477 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.889552 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.889573 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.889598 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.889616 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:13Z","lastTransitionTime":"2025-11-24T21:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.932890 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:39:13 crc kubenswrapper[4767]: E1124 21:39:13.933207 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:39:29.933168566 +0000 UTC m=+52.850151968 (durationBeforeRetry 16s). 
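
Interleaved with these volume retries, the pod and node status patches keep failing on the same TLS fault: the network-node-identity webhook at 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) lies months before the node clock. A sketch of the same validity-window comparison, assuming the certificate has been saved to a local PEM file (the path is hypothetical; in the log the certificate arrives via the TLS handshake):

```go
// cert_expiry.go - sketch of the NotBefore/NotAfter check behind the
// "certificate has expired or is not yet valid" errors in this log.
// The file path is hypothetical; it is not read from disk by the kubelet.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("webhook-serving-cert.pem") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// The branch this log keeps hitting:
		// current time 2025-11-24T21:39:11Z is after 2025-08-24T17:21:41Z.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

The kubelet cannot work around this locally; every status patch in the remainder of this log is rejected the same way until the webhook's certificate is replaced.
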
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.993255 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.993317 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.993329 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.993345 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:13 crc kubenswrapper[4767]: I1124 21:39:13.993357 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:13Z","lastTransitionTime":"2025-11-24T21:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.033966 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.034026 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.034064 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.034093 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034175 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034176 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034206 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034221 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034228 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:30.034212043 +0000 UTC m=+52.951195415 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034288 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:30.034253615 +0000 UTC m=+52.951236997 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034335 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034370 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034383 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034377 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034604 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:30.034510662 +0000 UTC m=+52.951494074 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.034709 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:30.034624786 +0000 UTC m=+52.951608228 (durationBeforeRetry 16s). 
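
The condition payload repeated by setters.go in these records is plain JSON in the shape of a core/v1 NodeCondition. A standard-library-only sketch that decodes one of the logged payloads into a local struct:

```go
// condition_decode.go - decodes the condition={"type":"Ready",...} payload
// logged by setters.go above, using a local struct mirroring its fields.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from one of the records above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:12Z","lastTransitionTime":"2025-11-24T21:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s since %s: %s (%s)\n",
		c.Type, c.Status, c.LastTransitionTime, c.Reason, c.Message)
}
```
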
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.096865 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.096938 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.096957 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.096985 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.097003 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.200075 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.200159 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.200177 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.200202 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.200220 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.302468 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.302520 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.302529 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.302550 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.302562 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.312746 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.312793 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.312927 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.313067 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.405227 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.405323 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.405347 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.405377 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.405399 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.507786 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.507819 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.507830 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.507845 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.507855 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.569136 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/0.log" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.573765 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17" exitCode=1 Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.573833 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.575164 4767 scope.go:117] "RemoveContainer" containerID="ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.580142 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.580213 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.580236 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.580305 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.580336 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.594594 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.599477 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.603583 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.603634 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.603646 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.603664 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.603676 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.617108 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.620997 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"4 21:39:13.740866 6067 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741142 6067 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741406 6067 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.740826 6067 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:13.741655 6067 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:39:13.741681 6067 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:39:13.741735 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:13.741770 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:39:13.741811 6067 factory.go:656] Stopping watch factory\\\\nI1124 21:39:13.741844 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:13.742072 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:13.742082 6067 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:39:13.742105 6067 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:13.742111 6067 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.621773 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.621811 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.621823 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.621842 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.621853 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.632965 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.634091 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.638352 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.638425 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.638436 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.638462 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.638519 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.648826 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.650357 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.653570 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.653605 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.653618 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.653635 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.653646 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.660434 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.664344 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: E1124 21:39:14.664450 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.665691 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.665717 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.665726 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.665738 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.665747 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.673212 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.683586 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.692733 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.702017 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.716501 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.728698 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.744941 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.757382 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.769761 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.769801 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.769812 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.769830 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.769841 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.773584 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.871925 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.873127 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.873222 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.873354 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.873452 4767 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.975884 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.975940 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.975956 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.975978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:14 crc kubenswrapper[4767]: I1124 21:39:14.975995 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:14Z","lastTransitionTime":"2025-11-24T21:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.077957 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.077997 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.078005 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.078021 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.078037 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:15Z","lastTransitionTime":"2025-11-24T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.180130 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.180172 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.180184 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.180200 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.180210 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:15Z","lastTransitionTime":"2025-11-24T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.282972 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.283024 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.283042 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.283064 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.283079 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:15Z","lastTransitionTime":"2025-11-24T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.312847 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:15 crc kubenswrapper[4767]: E1124 21:39:15.313015 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.386012 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.386055 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.386071 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.386094 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.386111 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:15Z","lastTransitionTime":"2025-11-24T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.488830 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.488884 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.488898 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.488916 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.488929 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:15Z","lastTransitionTime":"2025-11-24T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.579490 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/0.log" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.582943 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8"} Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.583083 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.591183 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.591231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.591241 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.591256 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.591287 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:15Z","lastTransitionTime":"2025-11-24T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.605215 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.618339 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.631466 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.644459 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\
\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.667432 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.684302 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.693985 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.694014 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.694024 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.694039 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.694050 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:15Z","lastTransitionTime":"2025-11-24T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.711371 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af838
38cdbbabe1248d3f546867a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"4 21:39:13.740866 6067 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741142 6067 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741406 6067 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.740826 6067 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:13.741655 6067 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:39:13.741681 6067 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:39:13.741735 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:13.741770 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:39:13.741811 6067 factory.go:656] Stopping watch factory\\\\nI1124 21:39:13.741844 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:13.742072 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:13.742082 6067 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:39:13.742105 6067 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:13.742111 6067 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.724123 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.738642 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.753841 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.765138 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.784659 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.796977 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.797030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.797041 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.797060 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.797072 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:15Z","lastTransitionTime":"2025-11-24T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.801095 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.818595 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.900262 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.900320 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.900332 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.900348 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.900360 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:15Z","lastTransitionTime":"2025-11-24T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.957132 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg"] Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.957622 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.959348 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.962780 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.982594 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8
a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:15 crc kubenswrapper[4767]: I1124 21:39:15.996677 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.002029 4767 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.002060 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.002069 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.002085 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.002094 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:16Z","lastTransitionTime":"2025-11-24T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.011406 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.037206 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af838
38cdbbabe1248d3f546867a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"4 21:39:13.740866 6067 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741142 6067 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741406 6067 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.740826 6067 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:13.741655 6067 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:39:13.741681 6067 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:39:13.741735 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:13.741770 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:39:13.741811 6067 factory.go:656] Stopping watch factory\\\\nI1124 21:39:13.741844 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:13.742072 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:13.742082 6067 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:39:13.742105 6067 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:13.742111 6067 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.051350 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwwqb\" (UniqueName: \"kubernetes.io/projected/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-kube-api-access-mwwqb\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: \"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.051458 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: \"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.051628 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: \"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.051691 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: \"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.054934 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.070508 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.086405 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.100454 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.105217 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.105727 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.105917 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.106179 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.106461 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:16Z","lastTransitionTime":"2025-11-24T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.116913 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.130387 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.147070 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.152801 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: \"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.154105 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: \"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.154441 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwwqb\" (UniqueName: \"kubernetes.io/projected/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-kube-api-access-mwwqb\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: \"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.154684 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: \"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.154839 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: \"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.155250 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: 
\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.160299 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: \"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.169478 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"c
ni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388
e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" 
for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.170300 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwwqb\" (UniqueName: \"kubernetes.io/projected/ad9f7d19-6d97-44a3-8918-41ba5bc39ef3-kube-api-access-mwwqb\") pod \"ovnkube-control-plane-749d76644c-8thvg\" (UID: \"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.182081 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.194427 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.205953 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.209079 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.209158 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.209175 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.209195 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.209246 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:16Z","lastTransitionTime":"2025-11-24T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.280386 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" Nov 24 21:39:16 crc kubenswrapper[4767]: W1124 21:39:16.299746 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad9f7d19_6d97_44a3_8918_41ba5bc39ef3.slice/crio-b4c05ee4cc70c802ce3112ac9aa22227cb617807687ef62de0d3c038c0aa8190 WatchSource:0}: Error finding container b4c05ee4cc70c802ce3112ac9aa22227cb617807687ef62de0d3c038c0aa8190: Status 404 returned error can't find the container with id b4c05ee4cc70c802ce3112ac9aa22227cb617807687ef62de0d3c038c0aa8190 Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.312538 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.312589 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.312608 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.312632 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.312649 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:16Z","lastTransitionTime":"2025-11-24T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.312750 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.312949 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:16 crc kubenswrapper[4767]: E1124 21:39:16.313219 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:16 crc kubenswrapper[4767]: E1124 21:39:16.313447 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.313532 4767 scope.go:117] "RemoveContainer" containerID="3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.415309 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.415363 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.415382 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.415405 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.415424 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:16Z","lastTransitionTime":"2025-11-24T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.517758 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.517805 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.517817 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.517839 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.517855 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:16Z","lastTransitionTime":"2025-11-24T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.591806 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.593689 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.594521 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.596622 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/1.log" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.597284 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/0.log" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.599503 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8" exitCode=1 Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.599553 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.599580 4767 scope.go:117] "RemoveContainer" containerID="ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.600224 4767 scope.go:117] "RemoveContainer" containerID="bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8" Nov 24 21:39:16 crc kubenswrapper[4767]: E1124 21:39:16.600390 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.602616 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" event={"ID":"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3","Type":"ContainerStarted","Data":"999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.602678 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" event={"ID":"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3","Type":"ContainerStarted","Data":"b4c05ee4cc70c802ce3112ac9aa22227cb617807687ef62de0d3c038c0aa8190"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.608456 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.619780 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.619822 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.619833 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.619850 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.619863 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:16Z","lastTransitionTime":"2025-11-24T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.625158 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.638225 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.651197 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.666563 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.678790 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.697718 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.722250 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.722304 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.722316 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.722332 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.722344 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:16Z","lastTransitionTime":"2025-11-24T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.727125 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"4 21:39:13.740866 6067 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741142 6067 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741406 6067 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.740826 6067 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:13.741655 6067 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:39:13.741681 6067 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:39:13.741735 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:13.741770 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:39:13.741811 6067 factory.go:656] Stopping watch factory\\\\nI1124 21:39:13.741844 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:13.742072 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:13.742082 6067 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:39:13.742105 6067 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:13.742111 6067 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.739506 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.752340 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed 
certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.779251 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.790055 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"na
me\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.806413 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.820634 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.825010 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.825203 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.825396 4767 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.825493 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.825569 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:16Z","lastTransitionTime":"2025-11-24T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.832084 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.842546 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.865623 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.878397 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.894248 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.909041 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.920956 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.927381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.927425 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.927437 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.927454 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.927468 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:16Z","lastTransitionTime":"2025-11-24T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.932661 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.944850 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.956620 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.974195 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"4 21:39:13.740866 6067 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741142 6067 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741406 6067 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.740826 6067 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:13.741655 6067 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:39:13.741681 6067 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:39:13.741735 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:13.741770 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:39:13.741811 6067 factory.go:656] Stopping watch factory\\\\nI1124 21:39:13.741844 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:13.742072 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:13.742082 6067 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:39:13.742105 6067 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:13.742111 6067 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"t handler 8 for removal\\\\nI1124 21:39:15.443027 6220 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:39:15.443040 6220 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:39:15.443060 6220 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:15.443066 6220 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:39:15.443089 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:15.443091 6220 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:39:15.443107 6220 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:39:15.443110 6220 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:15.443107 6220 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443160 6220 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:15.443198 6220 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:39:15.443207 6220 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443200 6220 factory.go:656] Stopping watch factory\\\\nI1124 21:39:15.443230 6220 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:39:15.443250 6220 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:39:15.443336 6220 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.983901 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"1
92.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:16 crc kubenswrapper[4767]: I1124 21:39:16.996362 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.007031 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.016804 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.028629 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.029467 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.029502 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.029514 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.029530 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.029540 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:17Z","lastTransitionTime":"2025-11-24T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.131927 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.131985 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.132003 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.132028 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.132045 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:17Z","lastTransitionTime":"2025-11-24T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.235511 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.235566 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.235577 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.235596 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.235610 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:17Z","lastTransitionTime":"2025-11-24T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.313353 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:17 crc kubenswrapper[4767]: E1124 21:39:17.313829 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.338489 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.338747 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.338892 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.339211 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.339319 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:17Z","lastTransitionTime":"2025-11-24T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.442237 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.442298 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.442311 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.442328 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.442340 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:17Z","lastTransitionTime":"2025-11-24T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.477195 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-q9q7p"] Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.478071 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:17 crc kubenswrapper[4767]: E1124 21:39:17.478230 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.492443 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.506701 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.522237 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.544317 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.545603 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.545636 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.545650 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.545669 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.545683 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:17Z","lastTransitionTime":"2025-11-24T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.558308 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.572144 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b9ld\" (UniqueName: \"kubernetes.io/projected/3b3c69a6-6755-47bf-8e68-d70004d77621-kube-api-access-9b9ld\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.572201 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.572423 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.583523 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.596430 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube
-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.607014 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/1.log" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.610872 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.611716 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" event={"ID":"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3","Type":"ContainerStarted","Data":"14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633"} Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.628669 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"moun
tPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.648098 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.648150 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.648159 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.648178 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.648223 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:17Z","lastTransitionTime":"2025-11-24T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.666121 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"4 21:39:13.740866 6067 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741142 6067 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741406 6067 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.740826 6067 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:13.741655 6067 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:39:13.741681 6067 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:39:13.741735 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:13.741770 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:39:13.741811 6067 factory.go:656] Stopping watch factory\\\\nI1124 21:39:13.741844 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:13.742072 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:13.742082 6067 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:39:13.742105 6067 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:13.742111 6067 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"t handler 8 for removal\\\\nI1124 21:39:15.443027 6220 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:39:15.443040 6220 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:39:15.443060 6220 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:15.443066 6220 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:39:15.443089 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:15.443091 6220 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:39:15.443107 6220 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:39:15.443110 6220 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:15.443107 6220 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443160 6220 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:15.443198 6220 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:39:15.443207 6220 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443200 6220 factory.go:656] Stopping watch factory\\\\nI1124 21:39:15.443230 6220 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:39:15.443250 6220 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:39:15.443336 6220 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.673695 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b9ld\" (UniqueName: \"kubernetes.io/projected/3b3c69a6-6755-47bf-8e68-d70004d77621-kube-api-access-9b9ld\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.673757 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:17 crc kubenswrapper[4767]: E1124 21:39:17.673886 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:39:17 crc kubenswrapper[4767]: E1124 21:39:17.673967 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs podName:3b3c69a6-6755-47bf-8e68-d70004d77621 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:18.173946469 +0000 UTC m=+41.090929841 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs") pod "network-metrics-daemon-q9q7p" (UID: "3b3c69a6-6755-47bf-8e68-d70004d77621") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.679136 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.690139 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b9ld\" (UniqueName: \"kubernetes.io/projected/3b3c69a6-6755-47bf-8e68-d70004d77621-kube-api-access-9b9ld\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.691087 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.704634 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.716488 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.729682 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.744895 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.751051 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.751079 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.751087 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.751100 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.751109 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:17Z","lastTransitionTime":"2025-11-24T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.757898 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.771653 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.784472 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.801146 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube
-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.815200 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.827876 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.844732 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af838
38cdbbabe1248d3f546867a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"4 21:39:13.740866 6067 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741142 6067 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741406 6067 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.740826 6067 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:13.741655 6067 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:39:13.741681 6067 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:39:13.741735 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:13.741770 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:39:13.741811 6067 factory.go:656] Stopping watch factory\\\\nI1124 21:39:13.741844 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:13.742072 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:13.742082 6067 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:39:13.742105 6067 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:13.742111 6067 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"t handler 8 for removal\\\\nI1124 21:39:15.443027 6220 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:39:15.443040 6220 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:39:15.443060 6220 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:15.443066 6220 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:39:15.443089 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:15.443091 6220 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:39:15.443107 6220 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:39:15.443110 6220 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:15.443107 6220 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443160 6220 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:15.443198 6220 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:39:15.443207 6220 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443200 6220 factory.go:656] Stopping watch factory\\\\nI1124 21:39:15.443230 6220 ovnkube.go:599] 
Stopped ovnkube\\\\nI1124 21:39:15.443250 6220 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:39:15.443336 6220 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.853222 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.853307 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.853319 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.853334 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.853348 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:17Z","lastTransitionTime":"2025-11-24T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.855778 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.867463 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.881633 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.892545 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.903995 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.916426 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.928965 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.944353 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:17Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.955946 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.955985 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:17 crc 
kubenswrapper[4767]: I1124 21:39:17.956001 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.956021 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:17 crc kubenswrapper[4767]: I1124 21:39:17.956037 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:17Z","lastTransitionTime":"2025-11-24T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.059638 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.059686 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.059699 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.059728 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.059746 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:18Z","lastTransitionTime":"2025-11-24T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.162593 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.162642 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.162655 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.162679 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.162703 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:18Z","lastTransitionTime":"2025-11-24T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.178706 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:18 crc kubenswrapper[4767]: E1124 21:39:18.178902 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:39:18 crc kubenswrapper[4767]: E1124 21:39:18.178970 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs podName:3b3c69a6-6755-47bf-8e68-d70004d77621 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:19.178952772 +0000 UTC m=+42.095936154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs") pod "network-metrics-daemon-q9q7p" (UID: "3b3c69a6-6755-47bf-8e68-d70004d77621") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.265871 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.265914 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.265928 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.265950 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.265967 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:18Z","lastTransitionTime":"2025-11-24T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.313000 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:18 crc kubenswrapper[4767]: E1124 21:39:18.313157 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.313690 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:18 crc kubenswrapper[4767]: E1124 21:39:18.313841 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.329472 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.340786 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.350596 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.364652 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube
-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.368184 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.368218 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.368229 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.368244 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.368255 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:18Z","lastTransitionTime":"2025-11-24T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.374770 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.385090 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.403530 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"4 21:39:13.740866 6067 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741142 6067 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741406 6067 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.740826 6067 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:13.741655 6067 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:39:13.741681 6067 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:39:13.741735 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:13.741770 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:39:13.741811 6067 factory.go:656] Stopping watch factory\\\\nI1124 21:39:13.741844 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:13.742072 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:13.742082 6067 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:39:13.742105 6067 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:13.742111 6067 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"t handler 8 for removal\\\\nI1124 21:39:15.443027 6220 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:39:15.443040 6220 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:39:15.443060 6220 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:15.443066 6220 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:39:15.443089 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:15.443091 6220 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:39:15.443107 6220 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:39:15.443110 6220 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:15.443107 6220 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443160 6220 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:15.443198 6220 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:39:15.443207 6220 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443200 6220 factory.go:656] Stopping watch factory\\\\nI1124 21:39:15.443230 6220 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:39:15.443250 6220 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:39:15.443336 6220 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.415489 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"1
92.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.428071 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.441494 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.457366 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.470654 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.470709 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.470726 4767 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.470750 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.470767 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:18Z","lastTransitionTime":"2025-11-24T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.473765 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-conf
ig\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.488475 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.501439 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.518554 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.533859 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.573466 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.573535 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:18 crc 
kubenswrapper[4767]: I1124 21:39:18.573559 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.573589 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:18 crc kubenswrapper[4767]: I1124 21:39:18.573611 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:18Z","lastTransitionTime":"2025-11-24T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
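The status patch above was rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose validity window ended 2025-08-24T17:21:41Z, three months before the node's clock reads 2025-11-24T21:39:18Z. The rejection comes from the standard x509 validity-window check during the TLS handshake; the following minimal Go sketch reproduces that check against a certificate on disk (the PEM path is an illustrative placeholder, not taken from this log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Placeholder path: point this at the webhook's serving certificate.
        data, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now()
        // Same window check that produced "certificate has expired or is not yet valid".
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate not yet valid: current time %s is before %s\n",
                now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        case now.After(cert.NotAfter):
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        default:
            fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }

On a CRC instance this pattern likely means the cluster's internal certificates lapsed while the VM was powered off; until they are rotated, the kubelet cannot patch pod status through this webhook.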
Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.189960 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:39:19 crc kubenswrapper[4767]: E1124 21:39:19.190171 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 21:39:19 crc kubenswrapper[4767]: E1124 21:39:19.190249 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs podName:3b3c69a6-6755-47bf-8e68-d70004d77621 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:21.190228205 +0000 UTC m=+44.107211607 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs") pod "network-metrics-daemon-q9q7p" (UID: "3b3c69a6-6755-47bf-8e68-d70004d77621") : object "openshift-multus"/"metrics-daemon-secret" not registered
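MountVolume.SetUp fails here not because the API server rejected anything but because the kubelet's local secret manager has no registration yet for openshift-multus/metrics-daemon-secret, so volume setup is deferred and retried (note the 2s durationBeforeRetry). A hedged client-go sketch to confirm that the secret actually exists on the API server side; the kubeconfig path is an assumption for illustration:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for your environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // The secret named in the MountVolume.SetUp failure above.
        s, err := client.CoreV1().Secrets("openshift-multus").Get(
            context.TODO(), "metrics-daemon-secret", metav1.GetOptions{})
        if err != nil {
            log.Fatalf("secret lookup failed: %v", err)
        }
        fmt.Printf("secret %s/%s exists with %d keys\n", s.Namespace, s.Name, len(s.Data))
    }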
Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.313209 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.313209 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:39:19 crc kubenswrapper[4767]: E1124 21:39:19.313507 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621"
Nov 24 21:39:19 crc kubenswrapper[4767]: E1124 21:39:19.313656 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
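Both sandbox failures above, and the NotReady condition itself, trace back to the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. The runtime's network-readiness check amounts to finding a loadable network configuration in that directory. A simplified Go sketch of that existence check; real loading via libcni also parses and validates each file, and the extension filter here mirrors what libcni conventionally accepts, which is an assumption worth verifying for your version:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet message
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", confDir, err)
            return
        }
        var found []string
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni conventionally considers
                found = append(found, e.Name())
            }
        }
        if len(found) == 0 {
            fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", confDir)
            return
        }
        fmt.Printf("CNI configs present: %v\n", found)
    }

Once the network provider (here, multus and its helper plugins) writes a config into this directory, the Ready condition flips and the queued sandboxes are created.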
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.399577 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.399633 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.399650 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.399675 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.399697 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:19Z","lastTransitionTime":"2025-11-24T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.503241 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.503326 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.503343 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.503367 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.503384 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:19Z","lastTransitionTime":"2025-11-24T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.606349 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.606424 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.606449 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.606483 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.606513 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:19Z","lastTransitionTime":"2025-11-24T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.709441 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.709485 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.709500 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.709519 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.709534 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:19Z","lastTransitionTime":"2025-11-24T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.811526 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.811589 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.811606 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.811632 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.811649 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:19Z","lastTransitionTime":"2025-11-24T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.914241 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.914330 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.914347 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.914371 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:19 crc kubenswrapper[4767]: I1124 21:39:19.914388 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:19Z","lastTransitionTime":"2025-11-24T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.018385 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.018446 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.018469 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.018501 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.018524 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:20Z","lastTransitionTime":"2025-11-24T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.121639 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.121718 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.121743 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.121775 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.121800 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:20Z","lastTransitionTime":"2025-11-24T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.312786 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:39:20 crc kubenswrapper[4767]: E1124 21:39:20.312894 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.312793 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:39:20 crc kubenswrapper[4767]: E1124 21:39:20.313085 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.327227 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.327296 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.327309 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.327326 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.327337 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:20Z","lastTransitionTime":"2025-11-24T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.430204 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.430305 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.430334 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.430356 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.430374 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:20Z","lastTransitionTime":"2025-11-24T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.532448 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.532507 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.532524 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.532546 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.532567 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:20Z","lastTransitionTime":"2025-11-24T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.636008 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.636065 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.636076 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.636101 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.636116 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:20Z","lastTransitionTime":"2025-11-24T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.739157 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.739221 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.739243 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.739303 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.739321 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:20Z","lastTransitionTime":"2025-11-24T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.842306 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.842365 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.842382 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.842406 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.842422 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:20Z","lastTransitionTime":"2025-11-24T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.945177 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.945251 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.945309 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.945343 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:20 crc kubenswrapper[4767]: I1124 21:39:20.945366 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:20Z","lastTransitionTime":"2025-11-24T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.047889 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.047964 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.047988 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.048025 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.048052 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:21Z","lastTransitionTime":"2025-11-24T21:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.213861 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:39:21 crc kubenswrapper[4767]: E1124 21:39:21.214126 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 21:39:21 crc kubenswrapper[4767]: E1124 21:39:21.214322 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs podName:3b3c69a6-6755-47bf-8e68-d70004d77621 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:25.214247086 +0000 UTC m=+48.131230508 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs") pod "network-metrics-daemon-q9q7p" (UID: "3b3c69a6-6755-47bf-8e68-d70004d77621") : object "openshift-multus"/"metrics-daemon-secret" not registered
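Comparing the two mount attempts shows the volume manager's exponential backoff: durationBeforeRetry was 2s at 21:39:19.190249 and 4s at 21:39:21.214322, with each retry window recorded in the "No retries permitted until" timestamp. A minimal Go sketch of that doubling pattern; the cap is an illustrative assumption, not a value taken from this log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Mirrors the doubling seen above: 2s, then 4s, and so on up to a cap.
        delay := 2 * time.Second
        maxDelay := 2 * time.Minute // assumed cap for illustration
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d failed; no retries permitted for %s\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }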
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.357544 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.357623 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.357659 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.357694 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.357721 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:21Z","lastTransitionTime":"2025-11-24T21:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.461308 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.461405 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.461425 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.461466 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.461483 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:21Z","lastTransitionTime":"2025-11-24T21:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.564043 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.564082 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.564106 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.564140 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.564157 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:21Z","lastTransitionTime":"2025-11-24T21:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.667136 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.667197 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.667209 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.667228 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.667240 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:21Z","lastTransitionTime":"2025-11-24T21:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.769935 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.770008 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.770025 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.770052 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.770073 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:21Z","lastTransitionTime":"2025-11-24T21:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.873149 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.873214 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.873226 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.873299 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.873315 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:21Z","lastTransitionTime":"2025-11-24T21:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.976360 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.976459 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.976517 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.976541 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:21 crc kubenswrapper[4767]: I1124 21:39:21.976593 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:21Z","lastTransitionTime":"2025-11-24T21:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.079470 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.079515 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.079528 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.079555 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.079567 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:22Z","lastTransitionTime":"2025-11-24T21:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.182165 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.182214 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.182225 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.182243 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.182255 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:22Z","lastTransitionTime":"2025-11-24T21:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.284902 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.284948 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.284962 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.284978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.284989 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:22Z","lastTransitionTime":"2025-11-24T21:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.312814 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.312957 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:22 crc kubenswrapper[4767]: E1124 21:39:22.313050 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:22 crc kubenswrapper[4767]: E1124 21:39:22.313119 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.387896 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.388156 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.388172 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.388193 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.388208 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:22Z","lastTransitionTime":"2025-11-24T21:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.491792 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.491859 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.491875 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.491900 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.491917 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:22Z","lastTransitionTime":"2025-11-24T21:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.594792 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.594851 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.594904 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.594934 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.594951 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:22Z","lastTransitionTime":"2025-11-24T21:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.697646 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.697691 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.697702 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.697722 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.697734 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:22Z","lastTransitionTime":"2025-11-24T21:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.799976 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.800020 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.800031 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.800047 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.800058 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:22Z","lastTransitionTime":"2025-11-24T21:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.906563 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.906609 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.906624 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.906644 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:22 crc kubenswrapper[4767]: I1124 21:39:22.906658 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:22Z","lastTransitionTime":"2025-11-24T21:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.009723 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.009768 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.009782 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.009801 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.009815 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:23Z","lastTransitionTime":"2025-11-24T21:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.113614 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.113662 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.113680 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.113703 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.113720 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:23Z","lastTransitionTime":"2025-11-24T21:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.215587 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.215644 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.215662 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.215686 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.215704 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:23Z","lastTransitionTime":"2025-11-24T21:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.313155 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:23 crc kubenswrapper[4767]: E1124 21:39:23.313338 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.313457 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:23 crc kubenswrapper[4767]: E1124 21:39:23.313585 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.319211 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.319255 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.319295 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.319319 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.319340 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:23Z","lastTransitionTime":"2025-11-24T21:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.421811 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.421865 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.421885 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.421913 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.421936 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:23Z","lastTransitionTime":"2025-11-24T21:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.524294 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.524351 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.524365 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.524391 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.524408 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:23Z","lastTransitionTime":"2025-11-24T21:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.627750 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.627826 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.627843 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.627869 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.627887 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:23Z","lastTransitionTime":"2025-11-24T21:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.731201 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.731253 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.731296 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.731322 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.731339 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:23Z","lastTransitionTime":"2025-11-24T21:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.834389 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.834544 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.834565 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.834588 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.834611 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:23Z","lastTransitionTime":"2025-11-24T21:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.937331 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.937396 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.937408 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.937425 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:23 crc kubenswrapper[4767]: I1124 21:39:23.937436 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:23Z","lastTransitionTime":"2025-11-24T21:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.039807 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.039866 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.039888 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.039911 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.039927 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.142588 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.142642 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.142657 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.142681 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.142697 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.245948 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.246001 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.246018 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.246043 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.246060 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.312758 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.312818 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:24 crc kubenswrapper[4767]: E1124 21:39:24.312955 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:24 crc kubenswrapper[4767]: E1124 21:39:24.313065 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.348832 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.348904 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.348925 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.348977 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.348995 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.452161 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.452214 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.452232 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.452257 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.452305 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.555525 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.555604 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.555627 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.555659 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.555687 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.658694 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.658779 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.658797 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.658822 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.658840 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.689876 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.689933 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.689950 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.689971 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.689987 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: E1124 21:39:24.710828 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:24Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.716581 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.716635 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.716654 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.716678 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.716696 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: E1124 21:39:24.737651 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:24Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.743238 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.743326 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.743350 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.743376 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.743397 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: E1124 21:39:24.764785 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:24Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.770770 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.771153 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.771377 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.771599 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.771780 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: E1124 21:39:24.795858 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:24Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.801487 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.801744 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.801907 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.802097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.802332 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: E1124 21:39:24.829618 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:24Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:24 crc kubenswrapper[4767]: E1124 21:39:24.829858 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.834609 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.834647 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.834664 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.834691 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.834711 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.937470 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.937503 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.937514 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.937532 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:24 crc kubenswrapper[4767]: I1124 21:39:24.937543 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:24Z","lastTransitionTime":"2025-11-24T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.039753 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.039824 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.039849 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.039879 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.039898 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:25Z","lastTransitionTime":"2025-11-24T21:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.142722 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.142771 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.142789 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.142812 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.142831 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:25Z","lastTransitionTime":"2025-11-24T21:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.245690 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.245760 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.245779 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.245803 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.245868 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:25Z","lastTransitionTime":"2025-11-24T21:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.254874 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:25 crc kubenswrapper[4767]: E1124 21:39:25.255063 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:39:25 crc kubenswrapper[4767]: E1124 21:39:25.255136 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs podName:3b3c69a6-6755-47bf-8e68-d70004d77621 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:33.255115189 +0000 UTC m=+56.172098571 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs") pod "network-metrics-daemon-q9q7p" (UID: "3b3c69a6-6755-47bf-8e68-d70004d77621") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.312259 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.312344 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:25 crc kubenswrapper[4767]: E1124 21:39:25.312460 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:25 crc kubenswrapper[4767]: E1124 21:39:25.312572 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.349193 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.349301 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.349326 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.349354 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.349374 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:25Z","lastTransitionTime":"2025-11-24T21:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.452083 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.452148 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.452168 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.452194 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:25 crc kubenswrapper[4767]: I1124 21:39:25.452212 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:25Z","lastTransitionTime":"2025-11-24T21:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.281826 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.281918 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.281947 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.281981 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.282010 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:26Z","lastTransitionTime":"2025-11-24T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.312836 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:26 crc kubenswrapper[4767]: E1124 21:39:26.313026 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.313107 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:26 crc kubenswrapper[4767]: E1124 21:39:26.313320 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.386209 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.386692 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.386870 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.386908 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.386937 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:26Z","lastTransitionTime":"2025-11-24T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.489601 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.489676 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.489693 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.489713 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.489730 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:26Z","lastTransitionTime":"2025-11-24T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.593369 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.593436 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.593453 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.593477 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.593495 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:26Z","lastTransitionTime":"2025-11-24T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.697050 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.697125 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.697150 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.697174 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.697191 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:26Z","lastTransitionTime":"2025-11-24T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.800918 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.800954 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.800963 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.800980 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.800990 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:26Z","lastTransitionTime":"2025-11-24T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.903640 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.903738 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.903772 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.903804 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:26 crc kubenswrapper[4767]: I1124 21:39:26.903843 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:26Z","lastTransitionTime":"2025-11-24T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.006306 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.006354 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.006366 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.006383 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.006394 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:27Z","lastTransitionTime":"2025-11-24T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.108958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.109008 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.109021 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.109041 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.109058 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:27Z","lastTransitionTime":"2025-11-24T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.212366 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.212420 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.212437 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.212460 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.212477 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:27Z","lastTransitionTime":"2025-11-24T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.313013 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.313048 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:27 crc kubenswrapper[4767]: E1124 21:39:27.313207 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:27 crc kubenswrapper[4767]: E1124 21:39:27.313439 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.315592 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.315647 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.315666 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.315688 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.315707 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:27Z","lastTransitionTime":"2025-11-24T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.419541 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.419589 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.419606 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.419629 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.419646 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:27Z","lastTransitionTime":"2025-11-24T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.522920 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.522989 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.523015 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.523051 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.523086 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:27Z","lastTransitionTime":"2025-11-24T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.626162 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.626235 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.626246 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.626284 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.626297 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:27Z","lastTransitionTime":"2025-11-24T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.729357 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.729394 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.729405 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.729422 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.729433 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:27Z","lastTransitionTime":"2025-11-24T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.832533 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.832608 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.832636 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.832665 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.832699 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:27Z","lastTransitionTime":"2025-11-24T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.936418 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.936470 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.936484 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.936506 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:27 crc kubenswrapper[4767]: I1124 21:39:27.936520 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:27Z","lastTransitionTime":"2025-11-24T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.039993 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.040058 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.040075 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.040101 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.040124 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:28Z","lastTransitionTime":"2025-11-24T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.143855 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.143909 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.143929 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.143955 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.143972 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:28Z","lastTransitionTime":"2025-11-24T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.248325 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.248385 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.248401 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.248426 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.248443 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:28Z","lastTransitionTime":"2025-11-24T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.313499 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:28 crc kubenswrapper[4767]: E1124 21:39:28.313713 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.313772 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:28 crc kubenswrapper[4767]: E1124 21:39:28.313927 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.332115 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.336554 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cd
dd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.351779 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.351839 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.351853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.351876 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.351891 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:28Z","lastTransitionTime":"2025-11-24T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.356716 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.374742 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 
21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.392640 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.409234 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.424999 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.445352 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.453872 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.453939 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.453956 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.453976 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.454020 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:28Z","lastTransitionTime":"2025-11-24T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.466063 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.482935 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.494242 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.505593 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.519302 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.531623 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.546362 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.556543 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.556610 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.556629 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.556657 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.556676 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:28Z","lastTransitionTime":"2025-11-24T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.570356 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af838
38cdbbabe1248d3f546867a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"4 21:39:13.740866 6067 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741142 6067 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741406 6067 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.740826 6067 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:13.741655 6067 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:39:13.741681 6067 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:39:13.741735 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:13.741770 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:39:13.741811 6067 factory.go:656] Stopping watch factory\\\\nI1124 21:39:13.741844 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:13.742072 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:13.742082 6067 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:39:13.742105 6067 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:13.742111 6067 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"t handler 8 for removal\\\\nI1124 21:39:15.443027 6220 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:39:15.443040 6220 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:39:15.443060 6220 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:15.443066 6220 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:39:15.443089 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:15.443091 6220 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:39:15.443107 6220 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:39:15.443110 6220 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:15.443107 6220 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443160 6220 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:15.443198 6220 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:39:15.443207 6220 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443200 6220 factory.go:656] Stopping watch factory\\\\nI1124 21:39:15.443230 6220 ovnkube.go:599] 
Stopped ovnkube\\\\nI1124 21:39:15.443250 6220 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:39:15.443336 6220 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.585769 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.606210 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.619524 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.629714 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.641014 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.659210 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.659805 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.659864 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.659882 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.659907 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.659923 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:28Z","lastTransitionTime":"2025-11-24T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.680879 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"4 21:39:13.740866 6067 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741142 6067 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741406 6067 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.740826 6067 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:13.741655 6067 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:39:13.741681 6067 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:39:13.741735 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:13.741770 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:39:13.741811 6067 factory.go:656] Stopping watch factory\\\\nI1124 21:39:13.741844 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:13.742072 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:13.742082 6067 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:39:13.742105 6067 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:13.742111 6067 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"t handler 8 for removal\\\\nI1124 21:39:15.443027 6220 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:39:15.443040 6220 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:39:15.443060 6220 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:15.443066 6220 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:39:15.443089 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:15.443091 6220 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:39:15.443107 6220 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:39:15.443110 6220 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:15.443107 6220 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443160 6220 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:15.443198 6220 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:39:15.443207 6220 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443200 6220 factory.go:656] Stopping watch factory\\\\nI1124 21:39:15.443230 6220 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:39:15.443250 6220 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:39:15.443336 6220 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.694472 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"1
92.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.705290 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.719215 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.734067 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.749881 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 
21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.762171 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.762216 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.762229 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.762248 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.762260 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:28Z","lastTransitionTime":"2025-11-24T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.763403 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.777366 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.789590 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.802575 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.817749 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.865183 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.865238 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.865252 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.865291 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.865307 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:28Z","lastTransitionTime":"2025-11-24T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.968375 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.968426 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.968442 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.968466 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:28 crc kubenswrapper[4767]: I1124 21:39:28.968485 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:28Z","lastTransitionTime":"2025-11-24T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.071047 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.071097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.071110 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.071128 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.071140 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:29Z","lastTransitionTime":"2025-11-24T21:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.173944 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.174001 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.174018 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.174042 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.174060 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:29Z","lastTransitionTime":"2025-11-24T21:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.277475 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.277549 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.277575 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.277609 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.277634 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:29Z","lastTransitionTime":"2025-11-24T21:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.312483 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.312489 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:29 crc kubenswrapper[4767]: E1124 21:39:29.312731 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:29 crc kubenswrapper[4767]: E1124 21:39:29.312853 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.380692 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.380772 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.380793 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.380814 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.380868 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:29Z","lastTransitionTime":"2025-11-24T21:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.484902 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.484959 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.484971 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.484997 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.485012 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:29Z","lastTransitionTime":"2025-11-24T21:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.587765 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.587808 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.587818 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.587834 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.587844 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:29Z","lastTransitionTime":"2025-11-24T21:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.690166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.690226 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.690245 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.690309 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.690328 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:29Z","lastTransitionTime":"2025-11-24T21:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.793498 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.793572 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.793592 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.793619 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.793635 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:29Z","lastTransitionTime":"2025-11-24T21:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.896735 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.896825 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.896854 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.896880 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:29 crc kubenswrapper[4767]: I1124 21:39:29.896898 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:29Z","lastTransitionTime":"2025-11-24T21:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:29.999911 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:29.999966 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:29.999983 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.000010 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.000027 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:29Z","lastTransitionTime":"2025-11-24T21:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.010606 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.010772 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:40:02.01074147 +0000 UTC m=+84.927724872 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.103010 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.103159 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.103183 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.103207 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.103223 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:30Z","lastTransitionTime":"2025-11-24T21:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.111668 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.111738 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.111811 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.111855 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.111988 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:40:02.111950322 +0000 UTC m=+85.028933734 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.111874 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.112024 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.112052 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.112095 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.112111 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:40:02.112085756 +0000 UTC m=+85.029069168 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.112116 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.112120 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.112157 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.112214 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.112194 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:40:02.112169779 +0000 UTC m=+85.029153181 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.112365 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:40:02.112342804 +0000 UTC m=+85.029326206 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.206180 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.206460 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.206479 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.206508 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.206527 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:30Z","lastTransitionTime":"2025-11-24T21:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.311342 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.311421 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.311457 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.311492 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.311519 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:30Z","lastTransitionTime":"2025-11-24T21:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.313198 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.313358 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.313552 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:30 crc kubenswrapper[4767]: E1124 21:39:30.314434 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.414040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.414098 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.414122 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.414154 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.414179 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:30Z","lastTransitionTime":"2025-11-24T21:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.473570 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.487545 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.493470 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 
21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.509126 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.516313 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.516361 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.516375 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.516393 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.516408 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:30Z","lastTransitionTime":"2025-11-24T21:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.528307 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.545453 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.564077 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.582058 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.601260 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.617888 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.618866 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.618919 4767 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.618934 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.618958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.618974 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:30Z","lastTransitionTime":"2025-11-24T21:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.634684 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.648984 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.668948 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.698907 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffb27d6b23a244948e0c9fa47554efb18ff76b8f562f165d27a34d3474d18c17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:14Z\\\",\\\"message\\\":\\\"4 21:39:13.740866 6067 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741142 6067 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.741406 6067 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 21:39:13.740826 6067 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:13.741655 6067 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:39:13.741681 6067 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:39:13.741735 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:13.741770 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:39:13.741811 6067 factory.go:656] Stopping watch factory\\\\nI1124 21:39:13.741844 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:13.742072 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:13.742082 6067 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:39:13.742105 6067 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI1124 21:39:13.742111 6067 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"t handler 8 for removal\\\\nI1124 21:39:15.443027 6220 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:39:15.443040 6220 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:39:15.443060 6220 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:15.443066 6220 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:39:15.443089 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:15.443091 6220 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:39:15.443107 6220 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:39:15.443110 6220 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:15.443107 6220 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443160 6220 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:15.443198 6220 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:39:15.443207 6220 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443200 6220 factory.go:656] Stopping watch factory\\\\nI1124 21:39:15.443230 6220 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:39:15.443250 6220 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:39:15.443336 6220 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.713805 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"1
92.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.722109 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.722168 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.722190 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.722228 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.722250 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:30Z","lastTransitionTime":"2025-11-24T21:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.730417 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.746215 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.760803 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:30Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.826017 4767 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.826083 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.826101 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.826125 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.826143 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:30Z","lastTransitionTime":"2025-11-24T21:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.928798 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.928873 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.928893 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.928918 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:30 crc kubenswrapper[4767]: I1124 21:39:30.928936 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:30Z","lastTransitionTime":"2025-11-24T21:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.031374 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.031442 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.031459 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.031482 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.031500 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:31Z","lastTransitionTime":"2025-11-24T21:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.061707 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.063132 4767 scope.go:117] "RemoveContainer" containerID="bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.096999 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.111817 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.127629 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.134800 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.134858 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.134874 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.134896 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.134913 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:31Z","lastTransitionTime":"2025-11-24T21:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.142836 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.157027 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.176128 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.192953 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.205291 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.219028 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.237135 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.237173 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.237182 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.237198 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.237210 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:31Z","lastTransitionTime":"2025-11-24T21:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.239435 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.250223 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.263040 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.274949 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.295327 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"t handler 8 for removal\\\\nI1124 21:39:15.443027 6220 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:39:15.443040 6220 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:39:15.443060 6220 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:15.443066 6220 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:39:15.443089 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:15.443091 6220 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:39:15.443107 6220 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:39:15.443110 6220 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:15.443107 6220 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443160 6220 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:15.443198 6220 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:39:15.443207 6220 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443200 6220 factory.go:656] Stopping watch factory\\\\nI1124 21:39:15.443230 6220 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:39:15.443250 6220 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:39:15.443336 6220 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.308991 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.312520 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.312549 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:31 crc kubenswrapper[4767]: E1124 21:39:31.312610 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:31 crc kubenswrapper[4767]: E1124 21:39:31.312680 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.320321 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.336386 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.339889 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.339939 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.339950 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.339969 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.339982 4767 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:31Z","lastTransitionTime":"2025-11-24T21:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.441840 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.441875 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.441884 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.441898 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.441908 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:31Z","lastTransitionTime":"2025-11-24T21:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.544686 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.544729 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.544740 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.544760 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.544770 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:31Z","lastTransitionTime":"2025-11-24T21:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.651951 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.651987 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.651998 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.652012 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.652022 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:31Z","lastTransitionTime":"2025-11-24T21:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.662955 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/1.log" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.665617 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785"} Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.666022 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.677247 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.693874 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.703909 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.714498 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.730053 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"t handler 8 for removal\\\\nI1124 21:39:15.443027 6220 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:39:15.443040 6220 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:39:15.443060 6220 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:15.443066 6220 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:39:15.443089 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:15.443091 6220 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:39:15.443107 6220 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:39:15.443110 6220 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:15.443107 6220 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443160 6220 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:15.443198 6220 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:39:15.443207 6220 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443200 6220 factory.go:656] Stopping watch factory\\\\nI1124 21:39:15.443230 6220 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:39:15.443250 6220 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:39:15.443336 6220 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.738176 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.748330 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.757703 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.757780 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.758394 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.758417 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.758430 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:31Z","lastTransitionTime":"2025-11-24T21:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.764085 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.784614 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 
21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.796720 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.814352 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.824062 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.838826 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.852046 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.860155 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.860197 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.860208 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.860223 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.860231 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:31Z","lastTransitionTime":"2025-11-24T21:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.863073 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.874330 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.887376 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.961981 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.962029 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.962044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.962063 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:31 crc kubenswrapper[4767]: I1124 21:39:31.962076 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:31Z","lastTransitionTime":"2025-11-24T21:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.064791 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.064861 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.064885 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.064910 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.064928 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:32Z","lastTransitionTime":"2025-11-24T21:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.167994 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.168064 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.168080 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.168105 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.168122 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:32Z","lastTransitionTime":"2025-11-24T21:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.271394 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.271453 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.271487 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.271523 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.271551 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:32Z","lastTransitionTime":"2025-11-24T21:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.313163 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:32 crc kubenswrapper[4767]: E1124 21:39:32.313361 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.313397 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:32 crc kubenswrapper[4767]: E1124 21:39:32.313547 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.374444 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.374507 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.374525 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.374550 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.374567 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:32Z","lastTransitionTime":"2025-11-24T21:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.479674 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.479729 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.479751 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.479779 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.479798 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:32Z","lastTransitionTime":"2025-11-24T21:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.583134 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.583201 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.583225 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.583258 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.583312 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:32Z","lastTransitionTime":"2025-11-24T21:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.672647 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/2.log" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.673838 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/1.log" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.678616 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785" exitCode=1 Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.678683 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785"} Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.678734 4767 scope.go:117] "RemoveContainer" containerID="bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.679922 4767 scope.go:117] "RemoveContainer" containerID="fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785" Nov 24 21:39:32 crc kubenswrapper[4767]: E1124 21:39:32.680171 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.690841 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.690888 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.690905 4767 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.690930 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.690947 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:32Z","lastTransitionTime":"2025-11-24T21:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.702460 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.725367 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.745651 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 
21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.766980 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.787383 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.793850 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.793919 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.793941 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.793970 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.793995 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:32Z","lastTransitionTime":"2025-11-24T21:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.807481 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.826982 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.845210 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.862792 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 
2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.884533 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.901118 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.901169 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.901186 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.901215 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.901237 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:32Z","lastTransitionTime":"2025-11-24T21:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.902151 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.927170 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.949939 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.965657 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:32 crc kubenswrapper[4767]: I1124 21:39:32.998024 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf9f22cc171d4a873812e344bf235825a47af83838cdbbabe1248d3f546867a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"message\\\":\\\"t handler 8 for removal\\\\nI1124 21:39:15.443027 6220 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:39:15.443040 6220 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:39:15.443060 6220 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 21:39:15.443066 6220 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:39:15.443089 6220 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 21:39:15.443091 6220 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:39:15.443107 6220 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:39:15.443110 6220 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:39:15.443107 6220 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443160 6220 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:39:15.443198 6220 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 21:39:15.443207 6220 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:39:15.443200 6220 factory.go:656] Stopping watch factory\\\\nI1124 21:39:15.443230 6220 ovnkube.go:599] Stopped ovnkube\\\\nI1124 21:39:15.443250 6220 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 21:39:15.443336 6220 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:32Z\\\",\\\"message\\\":\\\"-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:39:32.033369 6448 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1124 21:39:32.033729 6448 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 1.553767ms\\\\nF1124 21:39:32.033729 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 
0x1fcc3c0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.003248 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.003324 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.003346 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.003371 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.003390 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:33Z","lastTransitionTime":"2025-11-24T21:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.012051 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.026895 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.106443 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.106506 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.106523 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.106549 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.106571 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:33Z","lastTransitionTime":"2025-11-24T21:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.209458 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.209506 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.209521 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.209541 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.209554 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:33Z","lastTransitionTime":"2025-11-24T21:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.312332 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.312486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.312506 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.312513 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.312529 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.312537 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:33Z","lastTransitionTime":"2025-11-24T21:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:33 crc kubenswrapper[4767]: E1124 21:39:33.312552 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.312358 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:33 crc kubenswrapper[4767]: E1124 21:39:33.312756 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.349997 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:33 crc kubenswrapper[4767]: E1124 21:39:33.350177 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:39:33 crc kubenswrapper[4767]: E1124 21:39:33.350256 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs podName:3b3c69a6-6755-47bf-8e68-d70004d77621 nodeName:}" failed. No retries permitted until 2025-11-24 21:39:49.350232416 +0000 UTC m=+72.267215818 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs") pod "network-metrics-daemon-q9q7p" (UID: "3b3c69a6-6755-47bf-8e68-d70004d77621") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.415088 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.415143 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.415160 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.415189 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.415213 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:33Z","lastTransitionTime":"2025-11-24T21:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.517950 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.518020 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.518037 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.518062 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.518079 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:33Z","lastTransitionTime":"2025-11-24T21:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.621155 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.621218 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.621237 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.621261 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.621316 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:33Z","lastTransitionTime":"2025-11-24T21:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.685857 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/2.log" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.692077 4767 scope.go:117] "RemoveContainer" containerID="fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785" Nov 24 21:39:33 crc kubenswrapper[4767]: E1124 21:39:33.692406 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.712573 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.724774 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.724839 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.724864 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.724895 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.724919 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:33Z","lastTransitionTime":"2025-11-24T21:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.740422 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z"
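[Editorial note, not part of the captured log.] The patch above is rejected because the kubelet cannot verify the serving certificate of the pod.network-node-identity.openshift.io webhook: the error itself states that the current time (2025-11-24T21:39:33Z) is past the certificate's NotAfter date (2025-08-24T17:21:41Z). A minimal Go sketch of the same check, assuming the endpoint from the log (127.0.0.1:9743) is reachable from the node; InsecureSkipVerify is used only so the expired certificate can be fetched and inspected instead of being rejected at the handshake:

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Endpoint copied from the failing webhook Post in the log entry above.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()
        // First peer certificate is the webhook's serving certificate.
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%v\n",
            cert.Subject, cert.NotBefore, cert.NotAfter, time.Now().After(cert.NotAfter))
    }

An equivalent one-off check from a shell would pipe openssl s_client into openssl x509 -noout -dates; the point is that every "Failed to update status for pod" entry below fails on this single expired certificate, not on anything pod-specific.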
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.783197 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.800816 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.819765 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.826851 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.826920 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.826946 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.826979 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.827003 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:33Z","lastTransitionTime":"2025-11-24T21:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.836250 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\"
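[Editorial note, not part of the captured log.] The NodeNotReady condition above is the runtime network check: the node stays NotReady while /etc/kubernetes/cni/net.d/ holds no CNI configuration, and the component that would write that configuration (ovnkube-controller, seen crash-looping further below) never gets far enough to do so. A simplified, hypothetical illustration of the directory check in Go; the real logic lives in the container runtime's CNI handling, not in this exact form:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Directory named in the "Node became not ready" message above.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("NetworkReady=false:", err)
            return
        }
        var confs []string
        for _, e := range entries {
            // CNI config files are typically .conf, .conflist, or .json.
            switch strings.ToLower(filepath.Ext(e.Name())) {
            case ".conf", ".conflist", ".json":
                confs = append(confs, e.Name())
            }
        }
        if len(confs) == 0 {
            fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
            return
        }
        fmt.Println("NetworkReady=true:", confs)
    }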
:\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.850816 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.868426 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c
2295ca155f935ba4a9c25785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:32Z\\\",\\\"message\\\":\\\"-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:39:32.033369 6448 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1124 21:39:32.033729 6448 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 1.553767ms\\\\nF1124 21:39:32.033729 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.880617 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.890046 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.904466 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.919450 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.928819 4767 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.928849 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.928856 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.928869 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.928879 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:33Z","lastTransitionTime":"2025-11-24T21:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.932982 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.943773 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.953675 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:33 crc kubenswrapper[4767]: I1124 21:39:33.963220 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:33Z is after 2025-08-24T17:21:41Z" Nov 24 
21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.031610 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.031666 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.031684 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.031713 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.031737 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:34Z","lastTransitionTime":"2025-11-24T21:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.135035 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.135093 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.135112 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.135136 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.135154 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:34Z","lastTransitionTime":"2025-11-24T21:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.244899 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.244963 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.244980 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.245005 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.245022 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:34Z","lastTransitionTime":"2025-11-24T21:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.313324 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:34 crc kubenswrapper[4767]: E1124 21:39:34.313501 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.313571 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:34 crc kubenswrapper[4767]: E1124 21:39:34.313769 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.348851 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.348915 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.348933 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.348959 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.348980 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:34Z","lastTransitionTime":"2025-11-24T21:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.451990 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.452037 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.452054 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.452073 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.452087 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:34Z","lastTransitionTime":"2025-11-24T21:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.554711 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.554768 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.554788 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.554812 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.554900 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:34Z","lastTransitionTime":"2025-11-24T21:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.658025 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.658130 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.658157 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.658189 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.658215 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:34Z","lastTransitionTime":"2025-11-24T21:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.761756 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.761823 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.761841 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.761864 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.761881 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:34Z","lastTransitionTime":"2025-11-24T21:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.864860 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.864944 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.864969 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.865000 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.865023 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:34Z","lastTransitionTime":"2025-11-24T21:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.967664 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.967730 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.967755 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.967789 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:34 crc kubenswrapper[4767]: I1124 21:39:34.967813 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:34Z","lastTransitionTime":"2025-11-24T21:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.071383 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.071464 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.071489 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.071538 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.071569 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.175918 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.175981 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.175999 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.176024 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.176040 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.211605 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.211667 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.211676 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.211728 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.211738 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:35 crc kubenswrapper[4767]: E1124 21:39:35.223570 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.227517 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.227580 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.227600 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.227626 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.227644 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:35 crc kubenswrapper[4767]: E1124 21:39:35.241984 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.245739 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.245780 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.245795 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.245811 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.245822 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:35 crc kubenswrapper[4767]: E1124 21:39:35.259673 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.263854 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.263890 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.263898 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.263914 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.263926 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:35 crc kubenswrapper[4767]: E1124 21:39:35.274896 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.278978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.279024 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.279035 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.279051 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.279061 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:35 crc kubenswrapper[4767]: E1124 21:39:35.290490 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:35Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:35 crc kubenswrapper[4767]: E1124 21:39:35.290659 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.292234 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
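All three retries above fail identically: the node-status PATCH is rejected because the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24, three months before the current time in the log. A minimal diagnostic sketch (not part of the log) to read what that endpoint is actually serving; the host and port are taken from the Post URL in the errors above, and it assumes it is run on the node itself with Python 3:

#!/usr/bin/env python3
# Diagnostic sketch: fetch the certificate served by the webhook endpoint the
# kubelet is failing against. Host/port come from the log's Post URL; this is
# an assumption-laden illustration, not part of the cluster tooling.
import socket
import ssl

HOST, PORT = "127.0.0.1", 9743  # node.network-node-identity webhook, per the log

ctx = ssl.create_default_context()
ctx.check_hostname = False      # the cert is expired; we only want to read it,
ctx.verify_mode = ssl.CERT_NONE # not to trust it

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

# With CERT_NONE, getpeercert() only returns raw DER, so emit PEM for
# inspection, e.g. pipe the output into: openssl x509 -noout -dates
print(ssl.DER_cert_to_PEM_cert(der))

If the printed certificate's notAfter matches the 2025-08-24T17:21:41Z in the errors, the webhook's serving certificate simply needs to be rotated; the kubelet retry loop itself is behaving as designed.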
event="NodeHasSufficientMemory" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.292263 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.292289 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.292303 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.292313 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.313128 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.313199 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:35 crc kubenswrapper[4767]: E1124 21:39:35.313306 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:35 crc kubenswrapper[4767]: E1124 21:39:35.313463 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.395355 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.395409 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.395439 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.395459 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.395471 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.498291 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.498328 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.498338 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.498353 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.498362 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.600993 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.601030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.601049 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.601069 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.601081 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.703353 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.703421 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.703438 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.703464 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.703481 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.806601 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.806705 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.806728 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.806758 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.806782 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.910435 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.910492 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.910510 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.910534 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:35 crc kubenswrapper[4767]: I1124 21:39:35.910558 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:35Z","lastTransitionTime":"2025-11-24T21:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.015018 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.015094 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.015113 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.015139 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.015158 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:36Z","lastTransitionTime":"2025-11-24T21:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.118395 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.118453 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.118472 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.118501 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.118538 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:36Z","lastTransitionTime":"2025-11-24T21:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.222137 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.222211 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.222228 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.222252 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.222308 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:36Z","lastTransitionTime":"2025-11-24T21:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.312372 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.312424 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:36 crc kubenswrapper[4767]: E1124 21:39:36.312485 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:36 crc kubenswrapper[4767]: E1124 21:39:36.312627 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.325320 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.325380 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.325398 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.325421 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.325440 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:36Z","lastTransitionTime":"2025-11-24T21:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.427810 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.427858 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.427875 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.427902 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.427920 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:36Z","lastTransitionTime":"2025-11-24T21:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.530457 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.530507 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.530523 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.530548 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.530565 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:36Z","lastTransitionTime":"2025-11-24T21:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.633677 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.633739 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.633755 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.633781 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.633800 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:36Z","lastTransitionTime":"2025-11-24T21:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.737444 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.737516 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.737535 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.737560 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.737579 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:36Z","lastTransitionTime":"2025-11-24T21:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.840751 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.840816 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.840838 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.840870 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.840892 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:36Z","lastTransitionTime":"2025-11-24T21:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.944601 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.944657 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.944667 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.944688 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:36 crc kubenswrapper[4767]: I1124 21:39:36.944699 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:36Z","lastTransitionTime":"2025-11-24T21:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.047756 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.047800 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.047808 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.047826 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.047835 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:37Z","lastTransitionTime":"2025-11-24T21:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.151608 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.151676 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.151694 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.151719 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.151737 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:37Z","lastTransitionTime":"2025-11-24T21:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.255540 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.255687 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.255711 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.255737 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.255784 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:37Z","lastTransitionTime":"2025-11-24T21:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.313100 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.313150 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:37 crc kubenswrapper[4767]: E1124 21:39:37.313259 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
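
The sync failures above all hinge on one condition: the container runtime keeps reporting NetworkReady=false because no CNI configuration file has appeared in /etc/kubernetes/cni/net.d/. As a rough illustration of what such a readiness probe amounts to, the Go sketch below scans the directory named in the log for .conf/.conflist/.json files; it is a minimal stand-in under that assumption, not kubelet's actual libcni-based check, and the hasCNIConfig helper is invented for this sketch.

    // cnicheck.go - illustrative sketch of a CNI config-directory probe.
    // Not kubelet's real implementation; it only mirrors the condition
    // reported above: NetworkReady=false while no config file exists.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig reports whether dir contains at least one CNI
    // configuration file (.conf, .conflist or .json) - the condition
    // the kubelet log lines above are waiting on.
    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        if !ok {
            // Mirrors the logged condition text.
            fmt.Println("NetworkReady=false reason:NetworkPluginNotReady (no CNI config yet)")
            return
        }
        fmt.Println("NetworkReady=true")
    }

Once the network plugin (here OVN-Kubernetes, per the ovnkube pods in this log) writes its configuration into that directory, a check of this shape starts passing and the NodeNotReady churn stops.
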
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:37 crc kubenswrapper[4767]: E1124 21:39:37.313419 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.359235 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.359293 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.359307 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.359324 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.359336 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:37Z","lastTransitionTime":"2025-11-24T21:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.461894 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.461957 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.461974 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.461999 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.462016 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:37Z","lastTransitionTime":"2025-11-24T21:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.565300 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.565384 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.565428 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.565456 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.565478 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:37Z","lastTransitionTime":"2025-11-24T21:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.669040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.669125 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.669148 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.669175 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.669194 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:37Z","lastTransitionTime":"2025-11-24T21:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.772064 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.772108 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.772123 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.772140 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.772153 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:37Z","lastTransitionTime":"2025-11-24T21:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.875502 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.875550 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.875567 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.875590 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.875611 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:37Z","lastTransitionTime":"2025-11-24T21:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.978979 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.979036 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.979058 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.979091 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:37 crc kubenswrapper[4767]: I1124 21:39:37.979116 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:37Z","lastTransitionTime":"2025-11-24T21:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.082580 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.082656 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.082678 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.082723 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.082747 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:38Z","lastTransitionTime":"2025-11-24T21:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.186197 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.186253 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.186318 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.186344 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.186361 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:38Z","lastTransitionTime":"2025-11-24T21:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.289411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.289483 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.289495 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.289518 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.289532 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:38Z","lastTransitionTime":"2025-11-24T21:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.312386 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.312442 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:38 crc kubenswrapper[4767]: E1124 21:39:38.312551 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
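
Each of the recurring "Node became not ready" records above carries a single condition object whose fields (type, status, reason, message and the two timestamps) appear inline in the log. The sketch below rebuilds that object with simplified local types and marshals it, just to show where the logged JSON shape comes from; this NodeCondition struct is a stand-in for illustration, not the real k8s.io/api type.

    // nodecond.go - sketch of the condition object logged by setters.go
    // above. Uses a simplified local type, not the k8s.io/api structs.
    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // NodeCondition mirrors the fields visible in the log lines above.
    type NodeCondition struct {
        Type               string    `json:"type"`
        Status             string    `json:"status"`
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`
        Message            string    `json:"message"`
    }

    func main() {
        now := time.Date(2025, 11, 24, 21, 39, 38, 0, time.UTC)
        c := NodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  now,
            LastTransitionTime: now,
            Reason:             "KubeletNotReady",
            Message: "container runtime network not ready: NetworkReady=false " +
                "reason:NetworkPluginNotReady message:Network plugin returns error: " +
                "no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
                "Has your network provider started?",
        }
        b, _ := json.Marshal(c)
        fmt.Println(string(b)) // same shape as the condition={...} fragments above
    }

Running it prints a condition with the same field names and RFC3339 timestamps as the condition={...} fragments in the surrounding records.
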
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:38 crc kubenswrapper[4767]: E1124 21:39:38.312810 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.331314 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0
bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.346276 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.358939 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 
21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.372122 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.387928 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.392137 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.392171 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:38 crc 
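
The triple-escaped payloads inside these "Failed to update status for pod" records are strategic-merge patches: the status manager serializes only the changed status fields together with a $setElementOrder/conditions directive that preserves the ordering of the conditions list, then PATCHes the pod's status subresource. The sketch below assembles a payload of that shape with plain maps; the UID and condition values are copied from the network-node-identity-vrzqb entry above, and the code is a simplified stand-in rather than the kubelet's actual status_manager.go.

    // statuspatch.go - rough sketch of the strategic-merge patch bodies
    // quoted (triple-escaped) in the log entries above. Simplified maps;
    // not the kubelet's actual status manager.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        patch := map[string]any{
            "metadata": map[string]any{
                "uid": "ef543e1b-8068-4ea3-b32a-61027b32e95d", // UID from the log
            },
            "status": map[string]any{
                // Strategic-merge directive pinning the order of the
                // conditions list, as seen in the escaped payloads above.
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "PodReadyToStartContainers"},
                    {"type": "Initialized"},
                    {"type": "Ready"},
                    {"type": "ContainersReady"},
                    {"type": "PodScheduled"},
                },
                "conditions": []map[string]string{
                    {"lastTransitionTime": "2025-11-24T21:38:59Z", "status": "True", "type": "Ready"},
                },
            },
        }
        body, _ := json.Marshal(patch)
        // The kubelet sends this as a PATCH to the pod's status subresource;
        // in this log it never gets there, because the admission webhook's
        // expired certificate fails the TLS handshake first.
        fmt.Println(string(body))
    }

Marshaled, this yields the same kind of {"metadata":{"uid":...},"status":{"$setElementOrder/conditions":[...],...}} body that appears, escaped, in each failure above.
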
kubenswrapper[4767]: I1124 21:39:38.392182 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.392200 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.392211 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:38Z","lastTransitionTime":"2025-11-24T21:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.408467 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.426905 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.442618 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.455458 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.474336 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.493580 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 
2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.495691 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.495742 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.495758 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.495782 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.495799 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:38Z","lastTransitionTime":"2025-11-24T21:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.527054 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c
2295ca155f935ba4a9c25785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:32Z\\\",\\\"message\\\":\\\"-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:39:32.033369 6448 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1124 21:39:32.033729 6448 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 1.553767ms\\\\nF1124 21:39:32.033729 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.538362 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.551415 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.571223 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.592064 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.598712 4767 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.598764 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.598789 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.598818 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.598839 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:38Z","lastTransitionTime":"2025-11-24T21:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.615469 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:38Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.702833 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.702910 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.702935 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.702968 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.702990 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:38Z","lastTransitionTime":"2025-11-24T21:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.805766 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.805832 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.805846 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.805866 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.805879 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:38Z","lastTransitionTime":"2025-11-24T21:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.908629 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.908686 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.908704 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.908731 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:38 crc kubenswrapper[4767]: I1124 21:39:38.908748 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:38Z","lastTransitionTime":"2025-11-24T21:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.012143 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.012197 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.012210 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.012228 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.012241 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:39Z","lastTransitionTime":"2025-11-24T21:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.115258 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.115379 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.115460 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.115486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.115533 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:39Z","lastTransitionTime":"2025-11-24T21:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.217617 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.217692 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.217701 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.217717 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.217727 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:39Z","lastTransitionTime":"2025-11-24T21:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.312632 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.312651 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:39 crc kubenswrapper[4767]: E1124 21:39:39.312875 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:39 crc kubenswrapper[4767]: E1124 21:39:39.313020 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.320199 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.320289 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.320307 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.320333 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.320353 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:39Z","lastTransitionTime":"2025-11-24T21:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.423427 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.423523 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.423540 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.423560 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:39 crc kubenswrapper[4767]: I1124 21:39:39.423572 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:39Z","lastTransitionTime":"2025-11-24T21:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... the same five-line status block (four "Recording event message for node" records: NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, followed by the same "Node became not ready" KubeletNotReady/no-CNI-configuration record), identical except for timestamps, repeated at 21:39:39.526, .629, .733, .837, .940 and 21:39:40.044, .147, .250 ...]
Nov 24 21:39:40 crc kubenswrapper[4767]: I1124 21:39:40.313539 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:39:40 crc kubenswrapper[4767]: E1124 21:39:40.313766 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:39:40 crc kubenswrapper[4767]: I1124 21:39:40.313840 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:39:40 crc kubenswrapper[4767]: E1124 21:39:40.314096 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... the same status block repeated at 21:39:40.353 and 21:39:40.457 ...]
[... the same status block repeated at 21:39:40.560, .663, .768, .871, .974 and 21:39:41.078, .180, .283 ...]
Nov 24 21:39:41 crc kubenswrapper[4767]: I1124 21:39:41.312808 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:39:41 crc kubenswrapper[4767]: E1124 21:39:41.312965 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621"
Nov 24 21:39:41 crc kubenswrapper[4767]: I1124 21:39:41.313029 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:39:41 crc kubenswrapper[4767]: E1124 21:39:41.313079 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... the same status block repeated at 21:39:41.386 and 21:39:41.489 ...]
[... the same status block repeated at 21:39:41.592, .697, .800, .903 and 21:39:42.007, .110, .213 ...]
Nov 24 21:39:42 crc kubenswrapper[4767]: I1124 21:39:42.313007 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:39:42 crc kubenswrapper[4767]: I1124 21:39:42.313008 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:39:42 crc kubenswrapper[4767]: E1124 21:39:42.313215 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:39:42 crc kubenswrapper[4767]: E1124 21:39:42.313396 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the same status block repeated at 21:39:42.316 and 21:39:42.418 ...]
[... the same status block repeated at 21:39:42.520, .623, .726, .830, .932 and 21:39:43.036, .137, .240 ...]
Nov 24 21:39:43 crc kubenswrapper[4767]: I1124 21:39:43.312485 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:39:43 crc kubenswrapper[4767]: I1124 21:39:43.312601 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:39:43 crc kubenswrapper[4767]: E1124 21:39:43.312615 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:39:43 crc kubenswrapper[4767]: E1124 21:39:43.312873 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621"
[... the same status block repeated at 21:39:43.344 and 21:39:43.448 ...]
Has your network provider started?"}
Nov 24 21:39:43 crc kubenswrapper[4767]: I1124 21:39:43.550691 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:43 crc kubenswrapper[4767]: I1124 21:39:43.550719 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:43 crc kubenswrapper[4767]: I1124 21:39:43.550728 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:43 crc kubenswrapper[4767]: I1124 21:39:43.550742 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:43 crc kubenswrapper[4767]: I1124 21:39:43.550749 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:43Z","lastTransitionTime":"2025-11-24T21:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.269063 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.269109 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.269118 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.269135 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.269145 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:44Z","lastTransitionTime":"2025-11-24T21:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.312715 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.312770 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:39:44 crc kubenswrapper[4767]: E1124 21:39:44.312853 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:39:44 crc kubenswrapper[4767]: E1124 21:39:44.312967 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.371586 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.371637 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.371647 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.371664 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:44 crc kubenswrapper[4767]: I1124 21:39:44.371677 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:44Z","lastTransitionTime":"2025-11-24T21:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.296721 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.296800 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.296814 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.296840 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.296855 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:45Z","lastTransitionTime":"2025-11-24T21:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.313296 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.313312 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:39:45 crc kubenswrapper[4767]: E1124 21:39:45.313512 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:39:45 crc kubenswrapper[4767]: E1124 21:39:45.313619 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621"
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.655738 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.655773 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.655782 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.655797 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.655807 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:45Z","lastTransitionTime":"2025-11-24T21:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 21:39:45 crc kubenswrapper[4767]: E1124 21:39:45.676817 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:45Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.681591 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.681638 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.681655 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.681681 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.681698 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:45Z","lastTransitionTime":"2025-11-24T21:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:45 crc kubenswrapper[4767]: E1124 21:39:45.716603 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.720380 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.720429 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
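The x509 message above already contains everything needed to size the problem: current time 2025-11-24T21:39:45Z against the certificate's notAfter of 2025-08-24T17:21:41Z. A stdlib-only check of the gap, using just those two logged values:

from datetime import datetime, timezone

FMT = "%Y-%m-%dT%H:%M:%SZ"
# Both timestamps are copied verbatim from the webhook error above.
now = datetime.strptime("2025-11-24T21:39:45Z", FMT).replace(tzinfo=timezone.utc)
not_after = datetime.strptime("2025-08-24T17:21:41Z", FMT).replace(tzinfo=timezone.utc)

delta = now - not_after
print(f"certificate had been expired for {delta.days} days when this was logged")  # ~92 days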
event="NodeHasNoDiskPressure" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.720450 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.720476 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.720495 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:45Z","lastTransitionTime":"2025-11-24T21:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:45 crc kubenswrapper[4767]: E1124 21:39:45.741603 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:45Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.746628 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.746663 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.746674 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.746687 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.746696 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:45Z","lastTransitionTime":"2025-11-24T21:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:45 crc kubenswrapper[4767]: E1124 21:39:45.756900 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:45Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.761195 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.761233 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.761246 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.761286 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.761301 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:45Z","lastTransitionTime":"2025-11-24T21:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:45 crc kubenswrapper[4767]: E1124 21:39:45.775782 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:45Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:45 crc kubenswrapper[4767]: E1124 21:39:45.776103 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.778639 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
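All three patch attempts above fail identically: the kubelet cannot post the node status because the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, months before the node clock time of 2025-11-24T21:39:45Z. Below is a minimal sketch of how one might confirm the certificate dates from the node itself; it is a hypothetical illustration, not part of the log, and assumes Python 3 with the third-party cryptography package available on the node.

    # check_webhook_cert.py - hypothetical helper, not part of this log.
    # Connects to the webhook endpoint named in the errors above and prints
    # the validity window of whatever certificate it serves.
    import socket, ssl
    from cryptography import x509  # third-party package; assumed installed

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # accept the expired cert so we can read it
    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes

    cert = x509.load_der_x509_certificate(der)
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # per the log, 2025-08-24 17:21:41 UTC

If notAfter is indeed in the past, every status patch will keep failing until the webhook's certificate is rotated; the retry loop that follows is a direct consequence.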
event="NodeHasSufficientMemory" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.778822 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.778996 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.779158 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.779372 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:45Z","lastTransitionTime":"2025-11-24T21:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.882167 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.882422 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.882458 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.882481 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.882496 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:45Z","lastTransitionTime":"2025-11-24T21:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.985486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.985530 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.985543 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.985560 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:45 crc kubenswrapper[4767]: I1124 21:39:45.985570 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:45Z","lastTransitionTime":"2025-11-24T21:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.088189 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.088223 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.088234 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.088249 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.088259 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:46Z","lastTransitionTime":"2025-11-24T21:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.191321 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.191393 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.191416 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.191446 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.191468 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:46Z","lastTransitionTime":"2025-11-24T21:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.294303 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.294346 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.294357 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.294374 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.294385 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:46Z","lastTransitionTime":"2025-11-24T21:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.313027 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:46 crc kubenswrapper[4767]: E1124 21:39:46.313147 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.313027 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:46 crc kubenswrapper[4767]: E1124 21:39:46.313226 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.396459 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.396488 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.396497 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.396509 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.396519 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:46Z","lastTransitionTime":"2025-11-24T21:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.499567 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.499607 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.499615 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.499631 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.499641 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:46Z","lastTransitionTime":"2025-11-24T21:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.602874 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.602937 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.602955 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.602980 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.602997 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:46Z","lastTransitionTime":"2025-11-24T21:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.705564 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.705602 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.705615 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.705633 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.705645 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:46Z","lastTransitionTime":"2025-11-24T21:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.808753 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.808814 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.808824 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.808842 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.808853 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:46Z","lastTransitionTime":"2025-11-24T21:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.911336 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.911375 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.911386 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.911400 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:46 crc kubenswrapper[4767]: I1124 21:39:46.911412 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:46Z","lastTransitionTime":"2025-11-24T21:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.013165 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.013206 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.013218 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.013232 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.013244 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:47Z","lastTransitionTime":"2025-11-24T21:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.115778 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.115839 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.115848 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.115864 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.115874 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:47Z","lastTransitionTime":"2025-11-24T21:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.218499 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.218560 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.218577 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.218600 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.218619 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:47Z","lastTransitionTime":"2025-11-24T21:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.312466 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.312557 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:47 crc kubenswrapper[4767]: E1124 21:39:47.312664 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:47 crc kubenswrapper[4767]: E1124 21:39:47.312766 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.313832 4767 scope.go:117] "RemoveContainer" containerID="fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785" Nov 24 21:39:47 crc kubenswrapper[4767]: E1124 21:39:47.314300 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.321100 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.321166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.321193 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.321224 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.321246 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:47Z","lastTransitionTime":"2025-11-24T21:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.424181 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.424252 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.424263 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.424304 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.424313 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:47Z","lastTransitionTime":"2025-11-24T21:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.527114 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.527172 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.527189 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.527214 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.527231 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:47Z","lastTransitionTime":"2025-11-24T21:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.630632 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.630679 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.630689 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.630707 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.630718 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:47Z","lastTransitionTime":"2025-11-24T21:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.734216 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.734311 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.734337 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.734367 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.734388 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:47Z","lastTransitionTime":"2025-11-24T21:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.837766 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.837840 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.837864 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.837895 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.837918 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:47Z","lastTransitionTime":"2025-11-24T21:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.951229 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.951295 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.951314 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.951332 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:47 crc kubenswrapper[4767]: I1124 21:39:47.951343 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:47Z","lastTransitionTime":"2025-11-24T21:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.053171 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.053212 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.053225 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.053237 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.053247 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:48Z","lastTransitionTime":"2025-11-24T21:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.156028 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.156083 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.156100 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.156121 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.156137 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:48Z","lastTransitionTime":"2025-11-24T21:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.258242 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.258293 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.258311 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.258326 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.258338 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:48Z","lastTransitionTime":"2025-11-24T21:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.312437 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:48 crc kubenswrapper[4767]: E1124 21:39:48.312632 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.312999 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:48 crc kubenswrapper[4767]: E1124 21:39:48.313306 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.327561 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.344290 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c
2295ca155f935ba4a9c25785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:32Z\\\",\\\"message\\\":\\\"-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:39:32.033369 6448 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1124 21:39:32.033729 6448 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 1.553767ms\\\\nF1124 21:39:32.033729 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.353949 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.361595 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.361632 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.361642 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.361659 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.361670 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:48Z","lastTransitionTime":"2025-11-24T21:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.363678 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.379079 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.391060 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.402183 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.414237 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.427152 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.438529 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.449354 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.463184 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.463222 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.463234 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.463252 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.463285 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:48Z","lastTransitionTime":"2025-11-24T21:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.465431 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.477817 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.488226 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.498060 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.505397 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.514916 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:48Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.565727 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.565750 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.565758 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.565775 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.565786 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:48Z","lastTransitionTime":"2025-11-24T21:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.668061 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.668096 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.668104 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.668118 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.668127 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:48Z","lastTransitionTime":"2025-11-24T21:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.770528 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.770570 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.770581 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.770597 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.770608 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:48Z","lastTransitionTime":"2025-11-24T21:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.872864 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.872915 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.872923 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.872937 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.872945 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:48Z","lastTransitionTime":"2025-11-24T21:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.974847 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.974881 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.974889 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.974903 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:48 crc kubenswrapper[4767]: I1124 21:39:48.974911 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:48Z","lastTransitionTime":"2025-11-24T21:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.076915 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.076948 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.076959 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.076975 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.076986 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:49Z","lastTransitionTime":"2025-11-24T21:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.183297 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.183355 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.183376 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.183395 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.183409 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:49Z","lastTransitionTime":"2025-11-24T21:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
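Interleaved with the webhook failures, the node is also flapping NotReady for an independent reason: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI configuration yet, and the kubelet re-records the same five node events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, plus the Ready=False condition) roughly every 100 ms while it waits for the network provider to write one. A minimal sketch of that check as one might run it by hand on the node (Python 3 stdlib only; the script name is made up, and the extension filter is an assumption about which file types CNI loaders accept):

    # cni_config_check.py (hypothetical): report whether the directory the
    # kubelet is complaining about contains any CNI network configuration.
    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the log

    try:
        entries = sorted(os.listdir(CNI_CONF_DIR))
    except FileNotFoundError:
        entries = []

    configs = [e for e in entries if e.endswith((".conf", ".conflist", ".json"))]
    if configs:
        print("CNI config present:", ", ".join(configs))
    else:
        print("no CNI configuration file found; node will stay NotReady")

The multus-additional-cni-plugins init containers shown earlier all completed, so the directory should be populated as soon as the network provider pods (OVN-Kubernetes and multus here) finish starting.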
Has your network provider started?"} Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.286330 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.286381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.286392 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.286408 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.286418 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:49Z","lastTransitionTime":"2025-11-24T21:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.312567 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.312586 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:49 crc kubenswrapper[4767]: E1124 21:39:49.312721 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:49 crc kubenswrapper[4767]: E1124 21:39:49.312854 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.389704 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.389733 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.389740 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.389754 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.389763 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:49Z","lastTransitionTime":"2025-11-24T21:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.422904 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:49 crc kubenswrapper[4767]: E1124 21:39:49.423083 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:39:49 crc kubenswrapper[4767]: E1124 21:39:49.423141 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs podName:3b3c69a6-6755-47bf-8e68-d70004d77621 nodeName:}" failed. No retries permitted until 2025-11-24 21:40:21.423119542 +0000 UTC m=+104.340102934 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs") pod "network-metrics-daemon-q9q7p" (UID: "3b3c69a6-6755-47bf-8e68-d70004d77621") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.491996 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.492038 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.492050 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.492065 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.492076 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:49Z","lastTransitionTime":"2025-11-24T21:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.594471 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.594536 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.594547 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.594561 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.594569 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:49Z","lastTransitionTime":"2025-11-24T21:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
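The MountVolume.SetUp failure above for "metrics-certs" (object "openshift-multus"/"metrics-daemon-secret" not registered) is not retried immediately: the kubelet backs off exponentially per volume operation, and the quoted durationBeforeRetry of 32s lines up with a 500 ms base doubling on each consecutive failure, i.e. roughly the seventh straight attempt. The constants below are assumptions about kubelet defaults, not values taken from this log:

    # backoff_schedule.py (hypothetical): print the retry schedule implied by an
    # exponential backoff with a 500 ms base, factor 2, and a ~2m2s cap.
    base_s, cap_s = 0.5, 122.0
    delay = base_s
    for failure in range(1, 11):
        print(f"failure {failure:2d}: next retry in {delay:g}s")
        delay = min(delay * 2, cap_s)

Failure 7 in that schedule prints 32s, matching the "No retries permitted until 21:40:21" line, which is exactly 32 seconds after the 21:39:49 attempt.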
Has your network provider started?"} Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.697014 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.697087 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.697108 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.697130 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.697147 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:49Z","lastTransitionTime":"2025-11-24T21:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.799303 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.799338 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.799350 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.799367 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.799381 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:49Z","lastTransitionTime":"2025-11-24T21:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.901794 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.901836 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.901847 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.901866 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:49 crc kubenswrapper[4767]: I1124 21:39:49.901877 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:49Z","lastTransitionTime":"2025-11-24T21:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.005054 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.005107 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.005121 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.005138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.005149 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:50Z","lastTransitionTime":"2025-11-24T21:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.107588 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.107647 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.107665 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.107690 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.107707 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:50Z","lastTransitionTime":"2025-11-24T21:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.210440 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.210484 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.210495 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.210514 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.210527 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:50Z","lastTransitionTime":"2025-11-24T21:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.312454 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.312480 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:39:50 crc kubenswrapper[4767]: E1124 21:39:50.312579 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.312652 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.312677 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.312689 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:50 crc kubenswrapper[4767]: E1124 21:39:50.312670 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.312703 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.312714 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:50Z","lastTransitionTime":"2025-11-24T21:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.415878 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.416048 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.416077 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.416134 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.416159 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:50Z","lastTransitionTime":"2025-11-24T21:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.519004 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.519065 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.519082 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.519106 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.519122 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:50Z","lastTransitionTime":"2025-11-24T21:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.622503 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.622553 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.622569 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.622599 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.622645 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:50Z","lastTransitionTime":"2025-11-24T21:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.725294 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.725337 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.725346 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.725377 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.725389 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:50Z","lastTransitionTime":"2025-11-24T21:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.828883 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.828933 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.828945 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.828964 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.828980 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:50Z","lastTransitionTime":"2025-11-24T21:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.931663 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.931693 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.931707 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.931723 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:50 crc kubenswrapper[4767]: I1124 21:39:50.931732 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:50Z","lastTransitionTime":"2025-11-24T21:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.033972 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.034027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.034050 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.034079 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.034100 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:51Z","lastTransitionTime":"2025-11-24T21:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.137033 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.137103 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.137120 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.137144 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.137161 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:51Z","lastTransitionTime":"2025-11-24T21:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.240365 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.240428 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.240439 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.240461 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.240476 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:51Z","lastTransitionTime":"2025-11-24T21:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.313275 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.313362 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:39:51 crc kubenswrapper[4767]: E1124 21:39:51.313494 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:39:51 crc kubenswrapper[4767]: E1124 21:39:51.313650 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.343092 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.343168 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.343183 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.343208 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.343225 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:51Z","lastTransitionTime":"2025-11-24T21:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.445750 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.445807 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.445825 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.445847 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.445861 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:51Z","lastTransitionTime":"2025-11-24T21:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.548891 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.548963 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.548977 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.549003 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.549014 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:51Z","lastTransitionTime":"2025-11-24T21:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.652027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.652089 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.652134 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.652162 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.652183 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:51Z","lastTransitionTime":"2025-11-24T21:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.758869 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.758950 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.758962 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.758990 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.759007 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:51Z","lastTransitionTime":"2025-11-24T21:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.759713 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnz8t_f45850ec-6094-4a27-aa04-a35c002e6160/kube-multus/0.log"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.759804 4767 generic.go:334] "Generic (PLEG): container finished" podID="f45850ec-6094-4a27-aa04-a35c002e6160" containerID="8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049" exitCode=1
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.759854 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnz8t" event={"ID":"f45850ec-6094-4a27-aa04-a35c002e6160","Type":"ContainerDied","Data":"8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049"}
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.760517 4767 scope.go:117] "RemoveContainer" containerID="8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.777447 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.791687 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.809561 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.826409 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.841217 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.861466 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.861520 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.861536 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.861557 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.861574 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:51Z","lastTransitionTime":"2025-11-24T21:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.862866 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.878782 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.902903 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.918921 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.933647 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.945969 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.955633 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.963712 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.963744 
4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.963754 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.963770 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.963782 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:51Z","lastTransitionTime":"2025-11-24T21:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.970322 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:4
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.979703 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:51 crc kubenswrapper[4767]: I1124 21:39:51.993311 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:51Z\\\",\\\"message\\\":\\\"2025-11-24T21:39:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a\\\\n2025-11-24T21:39:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a to /host/opt/cni/bin/\\\\n2025-11-24T21:39:06Z [verbose] multus-daemon started\\\\n2025-11-24T21:39:06Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.012891 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:32Z\\\",\\\"message\\\":\\\"-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:39:32.033369 6448 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1124 21:39:32.033729 6448 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 1.553767ms\\\\nF1124 21:39:32.033729 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.025628 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.066233 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.066285 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.066333 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.066355 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.066371 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:52Z","lastTransitionTime":"2025-11-24T21:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.169692 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.169765 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.169788 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.169818 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.169840 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:52Z","lastTransitionTime":"2025-11-24T21:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.271866 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.271905 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.271918 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.271934 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.271946 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:52Z","lastTransitionTime":"2025-11-24T21:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.313413 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.313575 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:52 crc kubenswrapper[4767]: E1124 21:39:52.313662 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:52 crc kubenswrapper[4767]: E1124 21:39:52.313895 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.326415 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.375326 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.375374 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.375385 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.375401 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.375412 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:52Z","lastTransitionTime":"2025-11-24T21:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.477888 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.477930 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.477939 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.477955 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.477967 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:52Z","lastTransitionTime":"2025-11-24T21:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.580044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.580095 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.580106 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.580120 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.580166 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:52Z","lastTransitionTime":"2025-11-24T21:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.683151 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.683208 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.683225 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.683254 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.683314 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:52Z","lastTransitionTime":"2025-11-24T21:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.765322 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnz8t_f45850ec-6094-4a27-aa04-a35c002e6160/kube-multus/0.log" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.765440 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnz8t" event={"ID":"f45850ec-6094-4a27-aa04-a35c002e6160","Type":"ContainerStarted","Data":"702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2"} Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.785334 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.785379 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.785396 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.785419 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.785439 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:52Z","lastTransitionTime":"2025-11-24T21:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.787112 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.801345 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.825771 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:51Z\\\",\\\"message\\\":\\\"2025-11-24T21:39:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a\\\\n2025-11-24T21:39:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a to /host/opt/cni/bin/\\\\n2025-11-24T21:39:06Z [verbose] multus-daemon started\\\\n2025-11-24T21:39:06Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.856590 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:32Z\\\",\\\"message\\\":\\\"-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:39:32.033369 6448 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1124 21:39:32.033729 6448 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 1.553767ms\\\\nF1124 21:39:32.033729 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.873919 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.887928 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.888838 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.888903 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.888941 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.888973 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.888999 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:52Z","lastTransitionTime":"2025-11-24T21:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.900334 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c
439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.914504 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.927088 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 
21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.939268 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.950634 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.961099 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.975895 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.986960 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48de0f8a-7fde-4bec-8374-73459b1c7d8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95ef545043d6d5f94fe8d953f6e2662eae3be156c562322770edfc3488fc0a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"exitCode\\\":0,\\\"fini
shedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.992043 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.992097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.992107 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.992123 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.992132 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:52Z","lastTransitionTime":"2025-11-24T21:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:52 crc kubenswrapper[4767]: I1124 21:39:52.997801 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.008683 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:53Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.021026 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:53Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.030706 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:53Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.094337 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.094416 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.094439 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.094468 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.094488 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:53Z","lastTransitionTime":"2025-11-24T21:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.197556 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.197601 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.197614 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.197632 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.197642 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:53Z","lastTransitionTime":"2025-11-24T21:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.300312 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.300352 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.300363 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.300379 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.300391 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:53Z","lastTransitionTime":"2025-11-24T21:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.313061 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.313084 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:39:53 crc kubenswrapper[4767]: E1124 21:39:53.313339 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621"
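[Editor's note] The KubeletNotReady condition and the pod sync failures above all point at the same cause the message names: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal diagnostic sketch follows; it is not the kubelet's actual readiness check. The directory path is taken from the log, and the accepted file extensions are an assumption.

```go
// cnicheck.go - report whether any CNI network config is present.
// Sketch only; mirrors the condition the log complains about, not the
// kubelet's real network-plugin readiness logic.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions CNI commonly accepts (assumption)
			fmt.Printf("found CNI config: %s\n", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; node will stay NotReady")
	}
}
```

If this prints nothing, the network provider has not yet written its config, which matches the repeated KubeletNotReady condition recorded below.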
Nov 24 21:39:53 crc kubenswrapper[4767]: E1124 21:39:53.313424 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.403575 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.403660 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.403706 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.403729 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.403743 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:53Z","lastTransitionTime":"2025-11-24T21:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.508052 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.508126 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.508149 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.508178 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.508258 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:53Z","lastTransitionTime":"2025-11-24T21:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.610996 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.611044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.611061 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.611084 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.611101 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:53Z","lastTransitionTime":"2025-11-24T21:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.713833 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.713876 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.713887 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.713905 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.713915 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:53Z","lastTransitionTime":"2025-11-24T21:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.816935 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.816989 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.817005 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.817030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.817047 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:53Z","lastTransitionTime":"2025-11-24T21:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.920838 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.920936 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.920963 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.921012 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:53 crc kubenswrapper[4767]: I1124 21:39:53.921038 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:53Z","lastTransitionTime":"2025-11-24T21:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.023353 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.023402 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.023414 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.023431 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.023443 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:54Z","lastTransitionTime":"2025-11-24T21:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.126474 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.126562 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.126587 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.126616 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.126641 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:54Z","lastTransitionTime":"2025-11-24T21:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.229111 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.229170 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.229183 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.229201 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.229214 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:54Z","lastTransitionTime":"2025-11-24T21:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.313326 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.313406 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:39:54 crc kubenswrapper[4767]: E1124 21:39:54.313470 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:39:54 crc kubenswrapper[4767]: E1124 21:39:54.313645 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
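[Editor's note] The same "Error syncing pod, skipping" entry recurs for several pods as the kubelet retries. When triaging a journal like this, tallying those entries per pod shows which workloads are stuck. A small sketch (reads journal text from stdin; the pod="ns/name" field format is taken from the entries above; this is not part of any OpenShift tooling):

```go
// podsyncerrs.go - tally "Error syncing pod, skipping" entries per pod
// from journal text piped on stdin (e.g. the excerpt in this log).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// pod="<ns>/<name>" as printed by the kubelet's structured logging
	re := regexp.MustCompile(`Error syncing pod, skipping.*?pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%6d  %s\n", n, pod)
	}
}
```

For example: journalctl -u kubelet | go run podsyncerrs.go, assuming the kubelet runs as the unit started at the top of this journal.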
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.331976 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.332021 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.332038 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.332060 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.332076 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:54Z","lastTransitionTime":"2025-11-24T21:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.434537 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.434604 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.434625 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.434651 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.434673 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:54Z","lastTransitionTime":"2025-11-24T21:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.537449 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.537520 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.537546 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.537575 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.537600 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:54Z","lastTransitionTime":"2025-11-24T21:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.641023 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.641110 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.641137 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.641168 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.641191 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:54Z","lastTransitionTime":"2025-11-24T21:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.745063 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.745105 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.745116 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.745135 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.745147 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:54Z","lastTransitionTime":"2025-11-24T21:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.846969 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.847020 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.847033 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.847051 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.847063 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:54Z","lastTransitionTime":"2025-11-24T21:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.950315 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.950360 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.950370 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.950385 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:54 crc kubenswrapper[4767]: I1124 21:39:54.950396 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:54Z","lastTransitionTime":"2025-11-24T21:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.053906 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.053978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.053994 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.054018 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.054035 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.156772 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.156878 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.156896 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.156924 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.156942 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.260525 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.260597 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.260620 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.260649 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.260671 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.312493 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.312513 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:55 crc kubenswrapper[4767]: E1124 21:39:55.312802 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:55 crc kubenswrapper[4767]: E1124 21:39:55.312987 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.363380 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.363446 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.363463 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.363486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.363502 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.466941 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.466982 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.466995 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.467013 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.467024 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.570321 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.570354 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.570364 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.570379 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.570388 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.673113 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.673160 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.673175 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.673195 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.673206 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.776491 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.776543 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.776559 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.776581 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.776597 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.797408 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.797459 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.797477 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.797496 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.797508 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: E1124 21:39:55.814565 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.818844 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.818912 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.818930 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.818953 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.818967 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: E1124 21:39:55.837865 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.842083 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.842148 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.842166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.842193 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.842209 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: E1124 21:39:55.861678 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.866595 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.866647 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.866667 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.866689 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.866708 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: E1124 21:39:55.885551 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.890003 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.890047 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.890059 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.890079 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.890094 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:55 crc kubenswrapper[4767]: E1124 21:39:55.908606 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:55 crc kubenswrapper[4767]: E1124 21:39:55.908784 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.910862 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
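
The two entries just above show why the node status never lands: every status PATCH is intercepted by the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, roughly three months before the node's current clock of 2025-11-24T21:39:55Z, so the kubelet gives up once its fixed retry budget is exhausted ("update node status exceeds retry count"). A minimal Go sketch for confirming the expiry from the node itself; the address comes from the error text above, everything else is illustrative:

// certcheck.go: diagnostic sketch, not part of the cluster tooling.
// It completes a TLS handshake with the webhook endpoint without chain
// verification, then reports the leaf certificate's validity window,
// the same NotAfter comparison that fails in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip chain verification so the handshake succeeds even with an
		// expired certificate; we inspect the dates ourselves below.
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := state.PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.UTC().Format(time.RFC3339))
	if now.After(leaf.NotAfter) {
		// Matches the kubelet error: "current time ... is after ..."
		fmt.Println("certificate has EXPIRED")
	}
}
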
event="NodeHasSufficientMemory" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.910930 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.910942 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.910965 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:55 crc kubenswrapper[4767]: I1124 21:39:55.910977 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:55Z","lastTransitionTime":"2025-11-24T21:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.014321 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.014386 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.014403 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.014427 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.014445 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:56Z","lastTransitionTime":"2025-11-24T21:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.117671 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.117728 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.117748 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.117772 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.117789 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:56Z","lastTransitionTime":"2025-11-24T21:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.220987 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.221058 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.221077 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.221104 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.221121 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:56Z","lastTransitionTime":"2025-11-24T21:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.313292 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.313368 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:56 crc kubenswrapper[4767]: E1124 21:39:56.313463 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:56 crc kubenswrapper[4767]: E1124 21:39:56.313535 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
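
Every Ready=False heartbeat above, and the two "Error syncing pod, skipping" entries just before this point, trace back to the same precondition: the container runtime found no CNI configuration under /etc/kubernetes/cni/net.d/. On an OVN-Kubernetes based CRC node that file is typically written by the network plugin once it is up, which is itself blocked here by the expired webhook certificate. A small sketch, assuming only the directory path quoted in the message, that performs the same emptiness check:

// cnicheck.go: sketch to confirm what the kubelet is reporting; on a
// healthy node the directory would contain at least one config file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path quoted in the kubelet message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions CNI loaders scan for
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		// This is the condition behind NetworkPluginNotReady in the log.
		fmt.Println("no CNI configuration file in", dir)
	}
}
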
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.324766 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.324835 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.324858 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.324884 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.324902 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:56Z","lastTransitionTime":"2025-11-24T21:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.427346 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.427404 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.427423 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.427446 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.427464 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:56Z","lastTransitionTime":"2025-11-24T21:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.530219 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.530345 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.530364 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.530387 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.530407 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:56Z","lastTransitionTime":"2025-11-24T21:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.633579 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.633643 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.633660 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.633688 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.633704 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:56Z","lastTransitionTime":"2025-11-24T21:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.736236 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.736344 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.736368 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.736393 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.736410 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:56Z","lastTransitionTime":"2025-11-24T21:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.838970 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.839002 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.839010 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.839023 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.839031 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:56Z","lastTransitionTime":"2025-11-24T21:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.942040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.942115 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.942132 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.942158 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:56 crc kubenswrapper[4767]: I1124 21:39:56.942177 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:56Z","lastTransitionTime":"2025-11-24T21:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.045181 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.045234 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.045251 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.045313 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.045332 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:57Z","lastTransitionTime":"2025-11-24T21:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.148639 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.148701 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.148717 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.148742 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.148759 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:57Z","lastTransitionTime":"2025-11-24T21:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.251946 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.252015 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.252053 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.252085 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.252107 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:57Z","lastTransitionTime":"2025-11-24T21:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.313083 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.313104 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:57 crc kubenswrapper[4767]: E1124 21:39:57.313560 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
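
The Ready condition being re-recorded roughly every 100 ms above is the kubelet's status loop re-evaluating runtime readiness; the condition it prints is plain JSON and easy to consume programmatically when triaging a long capture like this one. A self-contained sketch that parses one of the condition objects logged above, using a local struct that mirrors the logged fields rather than the upstream k8s.io/api types; the message is truncated here for brevity:

// condition.go: parse one "Node became not ready" condition from the log.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// nodeCondition is a local stand-in mirroring the fields shown in the
// log, not the upstream k8s.io/api NodeCondition type.
type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// One of the condition objects logged above, message truncated.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:57Z","lastTransitionTime":"2025-11-24T21:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		fmt.Println("unmarshal failed:", err)
		return
	}
	fmt.Printf("%s=%s since %s (%s)\n",
		c.Type, c.Status, c.LastTransitionTime.Format(time.RFC3339), c.Reason)
}
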
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:57 crc kubenswrapper[4767]: E1124 21:39:57.313580 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.355572 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.355973 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.356161 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.356629 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.356874 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:57Z","lastTransitionTime":"2025-11-24T21:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.459614 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.459704 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.459721 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.459747 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.459797 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:57Z","lastTransitionTime":"2025-11-24T21:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.563042 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.563519 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.563740 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.563901 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.564033 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:57Z","lastTransitionTime":"2025-11-24T21:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.666670 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.666719 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.666734 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.666874 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.666889 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:57Z","lastTransitionTime":"2025-11-24T21:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.771483 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.771556 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.771573 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.771599 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.771619 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:57Z","lastTransitionTime":"2025-11-24T21:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.874431 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.874500 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.874526 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.874555 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.874577 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:57Z","lastTransitionTime":"2025-11-24T21:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.978040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.978097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.978114 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.978137 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:57 crc kubenswrapper[4767]: I1124 21:39:57.978153 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:57Z","lastTransitionTime":"2025-11-24T21:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.081077 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.081138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.081155 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.081176 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.081189 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:58Z","lastTransitionTime":"2025-11-24T21:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.184634 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.184670 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.184680 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.184697 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.184709 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:58Z","lastTransitionTime":"2025-11-24T21:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.288365 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.288471 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.288497 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.288530 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.288555 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:58Z","lastTransitionTime":"2025-11-24T21:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.313006 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.313032 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:39:58 crc kubenswrapper[4767]: E1124 21:39:58.313248 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
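
The "Failed to update status for pod" entries that follow (like the node patch earlier) log each undeliverable payload as a Go-quoted string, which is why the embedded JSON appears behind layers of \\\" escapes. One level of quoting can be peeled off with strconv.Unquote, as in this sketch; the payload below is a shortened, hypothetical stand-in, so paste a full quoted "{\\\"metadata\\\"...}" blob from one of the entries in its place:

// unquote.go: helper sketch for reading the escaped patches in this log.
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Shortened, hypothetical stand-in for a logged payload; substitute a
	// full quoted patch string copied from an entry below.
	quoted := `"{\"metadata\":{\"uid\":\"48de0f8a\"}}"`
	plain, err := strconv.Unquote(quoted)
	if err != nil {
		fmt.Println("unquote failed:", err)
		return
	}
	fmt.Println(plain) // prints: {"metadata":{"uid":"48de0f8a"}}
}
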
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:39:58 crc kubenswrapper[4767]: E1124 21:39:58.313511 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.330295 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48de0f8a-7fde-4bec-8374-73459b1c7d8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95ef545043d6d5f94fe8d953f6e2662eae3be156c562322770edfc3488fc0a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.347870 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35
a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.369122 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.388394 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.393767 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.393856 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.393885 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.393919 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.393958 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:58Z","lastTransitionTime":"2025-11-24T21:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.405235 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.426325 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.442813 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.457183 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:51Z\\\",\\\"message\\\":\\\"2025-11-24T21:39:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a\\\\n2025-11-24T21:39:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a to /host/opt/cni/bin/\\\\n2025-11-24T21:39:06Z [verbose] multus-daemon started\\\\n2025-11-24T21:39:06Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.478739 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:32Z\\\",\\\"message\\\":\\\"-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:39:32.033369 6448 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1124 21:39:32.033729 6448 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 1.553767ms\\\\nF1124 21:39:32.033729 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.491835 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.496575 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.496628 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.496646 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.496670 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.496687 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:58Z","lastTransitionTime":"2025-11-24T21:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.502948 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.514878 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.531388 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.545585 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.562162 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.575345 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.589690 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.600313 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.600401 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.600416 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.600435 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.600461 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:58Z","lastTransitionTime":"2025-11-24T21:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.612427 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:39:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.703297 4767 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.703697 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.703800 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.703970 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.704049 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:58Z","lastTransitionTime":"2025-11-24T21:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.806912 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.806948 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.806959 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.806977 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.806989 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:58Z","lastTransitionTime":"2025-11-24T21:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.910028 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.910085 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.910102 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.910126 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:58 crc kubenswrapper[4767]: I1124 21:39:58.910147 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:58Z","lastTransitionTime":"2025-11-24T21:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.013309 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.013594 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.013689 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.013782 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.013878 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:59Z","lastTransitionTime":"2025-11-24T21:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.116749 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.116829 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.116841 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.116863 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.116879 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:59Z","lastTransitionTime":"2025-11-24T21:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.219632 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.219946 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.220073 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.220213 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.220319 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:59Z","lastTransitionTime":"2025-11-24T21:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.312922 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:39:59 crc kubenswrapper[4767]: E1124 21:39:59.313053 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.313851 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:39:59 crc kubenswrapper[4767]: E1124 21:39:59.314128 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.322767 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.322798 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.322807 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.322821 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.322830 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:59Z","lastTransitionTime":"2025-11-24T21:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.425103 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.425162 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.425179 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.425243 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.425261 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:59Z","lastTransitionTime":"2025-11-24T21:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.527349 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.527390 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.527398 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.527411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.527421 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:59Z","lastTransitionTime":"2025-11-24T21:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.630097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.630166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.630190 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.630224 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.630245 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:59Z","lastTransitionTime":"2025-11-24T21:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.734580 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.734630 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.734640 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.734658 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.734673 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:59Z","lastTransitionTime":"2025-11-24T21:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.838199 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.838324 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.838353 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.838385 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.838409 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:59Z","lastTransitionTime":"2025-11-24T21:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.941358 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.941421 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.941442 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.941467 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:39:59 crc kubenswrapper[4767]: I1124 21:39:59.941485 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:39:59Z","lastTransitionTime":"2025-11-24T21:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.044657 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.044708 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.044716 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.044730 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.044739 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:00Z","lastTransitionTime":"2025-11-24T21:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.148157 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.148199 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.148210 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.148224 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.148235 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:00Z","lastTransitionTime":"2025-11-24T21:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.250664 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.250739 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.250756 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.250786 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.250824 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:00Z","lastTransitionTime":"2025-11-24T21:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.312918 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:00 crc kubenswrapper[4767]: E1124 21:40:00.313103 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.313238 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:00 crc kubenswrapper[4767]: E1124 21:40:00.313510 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.353579 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.353638 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.353654 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.353677 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.353694 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:00Z","lastTransitionTime":"2025-11-24T21:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.455817 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.455853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.455861 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.455876 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.455885 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:00Z","lastTransitionTime":"2025-11-24T21:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.559220 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.559333 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.559354 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.559376 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.559390 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:00Z","lastTransitionTime":"2025-11-24T21:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.661831 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.661872 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.661884 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.661903 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.661913 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:00Z","lastTransitionTime":"2025-11-24T21:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.765004 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.765072 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.765095 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.765125 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.765148 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:00Z","lastTransitionTime":"2025-11-24T21:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.868178 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.868249 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.868356 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.868396 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.868420 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:00Z","lastTransitionTime":"2025-11-24T21:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.971153 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.971212 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.971240 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.971264 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:00 crc kubenswrapper[4767]: I1124 21:40:00.971303 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:00Z","lastTransitionTime":"2025-11-24T21:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.074588 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.074637 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.074649 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.074671 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.074683 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:01Z","lastTransitionTime":"2025-11-24T21:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.177227 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.177325 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.177343 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.177363 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.177411 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:01Z","lastTransitionTime":"2025-11-24T21:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.280467 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.280504 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.280515 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.280532 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.280545 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:01Z","lastTransitionTime":"2025-11-24T21:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.313135 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.313184 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:01 crc kubenswrapper[4767]: E1124 21:40:01.313259 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:01 crc kubenswrapper[4767]: E1124 21:40:01.313392 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.382917 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.382995 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.383014 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.383040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.383059 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:01Z","lastTransitionTime":"2025-11-24T21:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.486472 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.486538 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.486565 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.486594 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.486611 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:01Z","lastTransitionTime":"2025-11-24T21:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.589815 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.589862 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.589876 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.589896 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.589908 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:01Z","lastTransitionTime":"2025-11-24T21:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.693351 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.693428 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.693450 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.693478 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.693499 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:01Z","lastTransitionTime":"2025-11-24T21:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.796045 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.796116 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.796148 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.796177 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.796198 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:01Z","lastTransitionTime":"2025-11-24T21:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.899620 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.899686 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.899702 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.899726 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:01 crc kubenswrapper[4767]: I1124 21:40:01.899747 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:01Z","lastTransitionTime":"2025-11-24T21:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.002745 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.002818 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.002844 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.002878 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.002904 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:02Z","lastTransitionTime":"2025-11-24T21:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.088994 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.089218 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:06.089190342 +0000 UTC m=+149.006173754 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.106206 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.106310 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.106324 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.106339 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.106349 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:02Z","lastTransitionTime":"2025-11-24T21:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.190378 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.190464 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190535 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.190524 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190603 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:41:06.190584262 +0000 UTC m=+149.107567634 (durationBeforeRetry 1m4s). 
Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.190626 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190692 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190714 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190734 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190709 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190748 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190781 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:41:06.190771638 +0000 UTC m=+149.107755010 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190739 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190804 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190815 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:41:06.190792688 +0000 UTC m=+149.107776100 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.190845 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:41:06.190831059 +0000 UTC m=+149.107814471 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
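The durationBeforeRetry of 1m4s in these nestedpendingoperations records is exponential backoff: the kubelet roughly doubles the wait after each failed volume operation, and 64s is what a 500ms initial delay reaches after seven doublings. The exact constants are an assumption here (they match kubelet sources I have seen, which also cap the backoff at a couple of minutes; verify against your version). A Go sketch of that schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed constants, modeled on the kubelet's exponential backoff
	// for failed volume operations: 500ms initial delay, doubling per
	// failure, capped at ~2m. Treat these as illustrative.
	const (
		initial  = 500 * time.Millisecond
		factor   = 2
		maxDelay = 2*time.Minute + 2*time.Second
	)
	d := initial
	for i := 1; i <= 10; i++ {
		fmt.Printf("failure %2d -> wait %v\n", i, d)
		d *= factor
		if d > maxDelay {
			d = maxDelay
		}
	}
}

Under these assumptions the 1m4s seen here would correspond to the eighth consecutive failure of the same MountVolume operation.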
Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.208998 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.209088 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.209104 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.209127 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.209144 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:02Z","lastTransitionTime":"2025-11-24T21:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.312433 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.312549 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.312702 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.314820 4767 scope.go:117] "RemoveContainer" containerID="fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785"
Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.316096 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:40:02 crc kubenswrapper[4767]: E1124 21:40:02.316435 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
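The "no CNI configuration file in /etc/kubernetes/cni/net.d/" message is the container runtime reporting an empty CNI config directory, and pods that still need a sandbox (like the two network-check pods above) cannot sync until ovn-kubernetes writes its config there. A Go sketch that reproduces the check; the directory path is taken from the message above, and the glob patterns reflect common CNI loader behavior (treat both as assumptions for other setups):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet message above; the runtime on this
	// node is configured to load its CNI config from here.
	dir := "/etc/kubernetes/cni/net.d"
	// CNI config loaders typically accept .conf, .conflist and .json.
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(dir, pat))
		found = append(found, m...)
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file - node will stay NotReady")
		os.Exit(1)
	}
	for _, f := range found {
		fmt.Println("found:", f)
	}
}

Once the restarted ovnkube-controller (see the ContainerStarted event below) writes 10-ovn-kubernetes.conf into that directory, the runtime reports NetworkReady=true and these sync errors stop.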
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.316468 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.316538 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.316572 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.316596 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:02Z","lastTransitionTime":"2025-11-24T21:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.420705 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.420747 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.420755 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.420772 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.420781 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:02Z","lastTransitionTime":"2025-11-24T21:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.522582 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.522887 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.522895 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.522908 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.522917 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:02Z","lastTransitionTime":"2025-11-24T21:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.625050 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.625142 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.625164 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.625188 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.625242 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:02Z","lastTransitionTime":"2025-11-24T21:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.728714 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.728785 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.728807 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.728835 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.728856 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:02Z","lastTransitionTime":"2025-11-24T21:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.809250 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/2.log" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.812759 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469"} Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.813487 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.832107 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.832166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.832180 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.832203 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.832222 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:02Z","lastTransitionTime":"2025-11-24T21:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.849594 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:32Z\\\",\\\"message\\\":\\\"-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:39:32.033369 6448 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1124 21:39:32.033729 6448 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 1.553767ms\\\\nF1124 21:39:32.033729 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 
0x1fcc3c0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.868763 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.885199 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.901731 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.941265 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.941324 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.941334 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.941350 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.941361 4767 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:02Z","lastTransitionTime":"2025-11-24T21:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.955778 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.978695 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:51Z\\\",\\\"message\\\":\\\"2025-11-24T21:39:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a\\\\n2025-11-24T21:39:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a to /host/opt/cni/bin/\\\\n2025-11-24T21:39:06Z [verbose] multus-daemon started\\\\n2025-11-24T21:39:06Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:02 crc kubenswrapper[4767]: I1124 21:40:02.992372 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.003355 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.012520 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.023619 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.035463 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.043646 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.043683 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.043693 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.043708 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.043717 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:03Z","lastTransitionTime":"2025-11-24T21:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.051711 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.062436 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.073335 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.082390 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.092873 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48de0f8a-7fde-4bec-8374-73459b1c7d8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95ef545043d6d5f94fe8d953f6e2662eae3be156c562322770edfc3488fc0a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.104150 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.123661 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 
2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.146445 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.146493 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.146504 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.146521 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.146532 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:03Z","lastTransitionTime":"2025-11-24T21:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.248376 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.248413 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.248423 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.248437 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.248451 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:03Z","lastTransitionTime":"2025-11-24T21:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.312663 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.312690 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:03 crc kubenswrapper[4767]: E1124 21:40:03.312783 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:03 crc kubenswrapper[4767]: E1124 21:40:03.312953 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.356039 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.356106 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.356119 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.356142 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.356156 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:03Z","lastTransitionTime":"2025-11-24T21:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.483897 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.483955 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.483969 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.483990 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.484003 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:03Z","lastTransitionTime":"2025-11-24T21:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.587180 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.587253 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.587305 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.587335 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.587358 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:03Z","lastTransitionTime":"2025-11-24T21:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.690474 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.690548 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.690585 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.690616 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.690638 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:03Z","lastTransitionTime":"2025-11-24T21:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.793950 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.794027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.794044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.794069 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.794086 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:03Z","lastTransitionTime":"2025-11-24T21:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.819953 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/3.log" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.820700 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/2.log" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.824010 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" exitCode=1 Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.824113 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469"} Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.824218 4767 scope.go:117] "RemoveContainer" containerID="fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.825118 4767 scope.go:117] "RemoveContainer" containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" Nov 24 21:40:03 crc kubenswrapper[4767]: E1124 21:40:03.825432 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.842884 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48de0f8a-7fde-4bec-8374-73459b1c7d8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95ef545043d6d5f94fe8d953f6e2662eae3be156c562322770edfc3488fc0a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.883948 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.897443 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.897494 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.897511 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.897539 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.897556 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:03Z","lastTransitionTime":"2025-11-24T21:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.901207 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.915401 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.927366 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.944995 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.959582 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:03 crc kubenswrapper[4767]: I1124 21:40:03.976907 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:51Z\\\",\\\"message\\\":\\\"2025-11-24T21:39:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a\\\\n2025-11-24T21:39:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a to /host/opt/cni/bin/\\\\n2025-11-24T21:39:06Z [verbose] multus-daemon started\\\\n2025-11-24T21:39:06Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.000717 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.000775 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.000792 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.000813 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.000825 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:04Z","lastTransitionTime":"2025-11-24T21:40:04Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.001832 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd58e38df01904cfa75b574ea3b10f6fe0d57b6c2295ca155f935ba4a9c25785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:32Z\\\",\\\"message\\\":\\\"-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 21:39:32.033369 6448 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1124 21:39:32.033729 6448 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 1.553767ms\\\\nF1124 21:39:32.033729 6448 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 
0x1fcc3c0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:40:03Z\\\",\\\"message\\\":\\\"in node crc\\\\nI1124 21:40:03.381663 6848 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 21:40:03.381667 6848 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 21:40:03.381673 6848 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nI1124 21:40:03.381677 6848 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nI1124 21:40:03.381681 6848 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-74ffd in node crc\\\\nI1124 21:40:03.381686 6848 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd after 0 failed attempt(s)\\\\nI1124 21:40:03.381689 6848 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nF1124 21:40:03.381692 6848 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.014171 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.026776 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.040409 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.055541 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.068118 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 
21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.083317 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.098588 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.102985 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.103057 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.103095 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.103118 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.103130 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:04Z","lastTransitionTime":"2025-11-24T21:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.112581 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.127930 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.206001 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.206066 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:04 crc 
kubenswrapper[4767]: I1124 21:40:04.206087 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.206136 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.206155 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:04Z","lastTransitionTime":"2025-11-24T21:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.309848 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.309921 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.309938 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.309964 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.309981 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:04Z","lastTransitionTime":"2025-11-24T21:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.313323 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.313360 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:04 crc kubenswrapper[4767]: E1124 21:40:04.313443 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:04 crc kubenswrapper[4767]: E1124 21:40:04.313507 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.413301 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.413370 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.413390 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.413419 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.413440 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:04Z","lastTransitionTime":"2025-11-24T21:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.517196 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.517289 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.517305 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.517329 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.517349 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:04Z","lastTransitionTime":"2025-11-24T21:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.622312 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.622367 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.622381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.622405 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.622420 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:04Z","lastTransitionTime":"2025-11-24T21:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.725078 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.725132 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.725164 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.725186 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.725201 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:04Z","lastTransitionTime":"2025-11-24T21:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.828392 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.828449 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.828468 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.828491 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.828508 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:04Z","lastTransitionTime":"2025-11-24T21:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.830576 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/3.log" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.836408 4767 scope.go:117] "RemoveContainer" containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" Nov 24 21:40:04 crc kubenswrapper[4767]: E1124 21:40:04.836662 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.853926 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48de0f8a-7fde-4bec-8374-73459b1c7d8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95ef545043d6d5f94fe8d953f6e2662eae3be156c562322770edfc3488fc0a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"exitCode\\\":0,\
\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.873244 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b911693963
00d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.892115 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.911488 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.924927 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.931403 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.931460 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.931472 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.931491 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.931505 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:04Z","lastTransitionTime":"2025-11-24T21:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.943225 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9
f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.963730 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:04 crc kubenswrapper[4767]: I1124 21:40:04.980774 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:51Z\\\",\\\"message\\\":\\\"2025-11-24T21:39:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a\\\\n2025-11-24T21:39:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a to /host/opt/cni/bin/\\\\n2025-11-24T21:39:06Z [verbose] multus-daemon started\\\\n2025-11-24T21:39:06Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.006540 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:40:03Z\\\",\\\"message\\\":\\\"in node crc\\\\nI1124 21:40:03.381663 6848 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 21:40:03.381667 6848 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 21:40:03.381673 6848 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nI1124 21:40:03.381677 6848 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nI1124 21:40:03.381681 6848 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-74ffd in node crc\\\\nI1124 21:40:03.381686 6848 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd after 0 failed attempt(s)\\\\nI1124 21:40:03.381689 6848 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nF1124 21:40:03.381692 6848 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:40:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.021014 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.034141 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.034191 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.034202 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.034220 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.034234 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:05Z","lastTransitionTime":"2025-11-24T21:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.039551 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.057220 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.072816 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.087711 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.103557 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.123180 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.136510 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.136560 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.136577 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.136602 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.136620 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:05Z","lastTransitionTime":"2025-11-24T21:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.140672 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.163866 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.238987 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.239060 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.239085 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.239115 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.239137 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:05Z","lastTransitionTime":"2025-11-24T21:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.312628 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.312635 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:05 crc kubenswrapper[4767]: E1124 21:40:05.312950 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:05 crc kubenswrapper[4767]: E1124 21:40:05.313018 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.343149 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.343203 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.343219 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.343242 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.343257 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:05Z","lastTransitionTime":"2025-11-24T21:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.446642 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.446796 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.446865 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.446891 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.446917 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:05Z","lastTransitionTime":"2025-11-24T21:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.550361 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.550463 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.550566 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.550598 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.550622 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:05Z","lastTransitionTime":"2025-11-24T21:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.653346 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.653455 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.653483 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.653512 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.653534 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:05Z","lastTransitionTime":"2025-11-24T21:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.756899 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.756931 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.756941 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.756959 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.756973 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:05Z","lastTransitionTime":"2025-11-24T21:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.859527 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.859593 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.859618 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.859647 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.859670 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:05Z","lastTransitionTime":"2025-11-24T21:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.963001 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.963071 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.963088 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.963116 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:05 crc kubenswrapper[4767]: I1124 21:40:05.963134 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:05Z","lastTransitionTime":"2025-11-24T21:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.065946 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.066004 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.066027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.066045 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.066057 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.168504 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.168548 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.168559 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.168574 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.168588 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.225365 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.225445 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.225465 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.225513 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.225533 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: E1124 21:40:06.248047 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.253409 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.253459 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.253473 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.253490 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.253504 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: E1124 21:40:06.273908 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.279205 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.279248 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.279259 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.279308 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.279334 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: E1124 21:40:06.299285 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.303673 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.303712 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.303725 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.303746 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.303760 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.313041 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.313075 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:06 crc kubenswrapper[4767]: E1124 21:40:06.313234 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:06 crc kubenswrapper[4767]: E1124 21:40:06.313343 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:06 crc kubenswrapper[4767]: E1124 21:40:06.321604 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.325548 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.325588 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
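Every failed status patch in this stretch dies the same way: the kubelet's POST to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails TLS verification because the serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2025-11-24T21:40:06Z. Below is a minimal sketch, not part of the log, for confirming the certificate window from the node. It assumes Go is available and that the endpoint address is the one quoted in the errors; it deliberately skips verification because verification is exactly what is failing, and the goal is to read the dates, not to trust the connection.

```go
// certcheck.go: print the validity window of the certificate served by the
// webhook endpoint the kubelet is failing to reach (address taken from the
// errors above; adjust if your webhook listens elsewhere).
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify is deliberate: we want to inspect the expired
	// certificate, not establish a trusted connection.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	now := time.Now().UTC()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject.String(),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
```

The interleaved NetworkReady=false / NetworkPluginNotReady entries are the same outage seen from the CNI side: ovn-kubernetes has not yet written a config into /etc/kubernetes/cni/net.d/, so the node keeps cycling through NodeNotReady independently of the webhook failures.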
event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.325605 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.325624 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.325639 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: E1124 21:40:06.338623 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:06 crc kubenswrapper[4767]: E1124 21:40:06.338830 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.340453 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.340558 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.340575 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.340592 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.340945 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.443561 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.443619 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.443633 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.443655 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.443670 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.547027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.547071 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.547085 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.547144 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.547162 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.650381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.650448 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.650469 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.650497 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.650520 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.753822 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.753882 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.753898 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.753923 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.753940 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.856698 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.856769 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.856793 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.856820 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.856843 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.960310 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.960364 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.960375 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.960392 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:06 crc kubenswrapper[4767]: I1124 21:40:06.960403 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:06Z","lastTransitionTime":"2025-11-24T21:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.063850 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.063965 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.063984 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.064008 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.064027 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:07Z","lastTransitionTime":"2025-11-24T21:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.167104 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.167144 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.167153 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.167168 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.167178 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:07Z","lastTransitionTime":"2025-11-24T21:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.270327 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.270372 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.270385 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.270401 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.270415 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:07Z","lastTransitionTime":"2025-11-24T21:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.313185 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.313190 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:07 crc kubenswrapper[4767]: E1124 21:40:07.313536 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:07 crc kubenswrapper[4767]: E1124 21:40:07.313399 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.374100 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.374164 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.374187 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.374216 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.374240 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:07Z","lastTransitionTime":"2025-11-24T21:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.477131 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.477188 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.477205 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.477228 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.477245 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:07Z","lastTransitionTime":"2025-11-24T21:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.580830 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.580889 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.580907 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.580932 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.580954 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:07Z","lastTransitionTime":"2025-11-24T21:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.684825 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.684881 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.684898 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.684925 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.684942 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:07Z","lastTransitionTime":"2025-11-24T21:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.788025 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.788106 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.788131 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.788161 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.788183 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:07Z","lastTransitionTime":"2025-11-24T21:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.890777 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.890813 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.890821 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.890838 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.890848 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:07Z","lastTransitionTime":"2025-11-24T21:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.993579 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.993648 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.993667 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.993694 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:07 crc kubenswrapper[4767]: I1124 21:40:07.993712 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:07Z","lastTransitionTime":"2025-11-24T21:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.096581 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.096648 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.096667 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.096692 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.096709 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:08Z","lastTransitionTime":"2025-11-24T21:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.200027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.200089 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.200106 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.200132 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.200149 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:08Z","lastTransitionTime":"2025-11-24T21:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.303073 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.303145 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.303163 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.303189 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.303209 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:08Z","lastTransitionTime":"2025-11-24T21:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.312732 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.312775 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:08 crc kubenswrapper[4767]: E1124 21:40:08.312894 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:08 crc kubenswrapper[4767]: E1124 21:40:08.313250 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.335052 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0
bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.352337 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.370007 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 
21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.384904 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.397039 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.404964 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.405010 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.405022 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.405037 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.405048 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:08Z","lastTransitionTime":"2025-11-24T21:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.410313 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.429565 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.440944 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48de0f8a-7fde-4bec-8374-73459b1c7d8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95ef545043d6d5f94fe8d953f6e2662eae3be156c562322770edfc3488fc0a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.453687 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.473518 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 
2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.492826 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.505978 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.507499 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.507550 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.507567 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.507589 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.507606 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:08Z","lastTransitionTime":"2025-11-24T21:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.519766 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.530740 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.542597 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:51Z\\\",\\\"message\\\":\\\"2025-11-24T21:39:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a\\\\n2025-11-24T21:39:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a to /host/opt/cni/bin/\\\\n2025-11-24T21:39:06Z [verbose] multus-daemon started\\\\n2025-11-24T21:39:06Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.564254 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:40:03Z\\\",\\\"message\\\":\\\"in node crc\\\\nI1124 21:40:03.381663 6848 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 21:40:03.381667 6848 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 21:40:03.381673 6848 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nI1124 21:40:03.381677 6848 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nI1124 21:40:03.381681 6848 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-74ffd in node crc\\\\nI1124 21:40:03.381686 6848 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd after 0 failed attempt(s)\\\\nI1124 21:40:03.381689 6848 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nF1124 21:40:03.381692 6848 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:40:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.576637 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.587803 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.613027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.613079 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.613095 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.613120 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.613137 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:08Z","lastTransitionTime":"2025-11-24T21:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.716416 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.716491 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.716506 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.716561 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.716579 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:08Z","lastTransitionTime":"2025-11-24T21:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.819486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.819590 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.819612 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.819964 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.820183 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:08Z","lastTransitionTime":"2025-11-24T21:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.923304 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.923377 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.923391 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.923409 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:08 crc kubenswrapper[4767]: I1124 21:40:08.923424 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:08Z","lastTransitionTime":"2025-11-24T21:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.026259 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.026322 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.026331 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.026345 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.026661 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:09Z","lastTransitionTime":"2025-11-24T21:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.129564 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.129603 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.129611 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.129625 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.129636 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:09Z","lastTransitionTime":"2025-11-24T21:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.232010 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.232070 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.232089 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.232120 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.232144 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:09Z","lastTransitionTime":"2025-11-24T21:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.313047 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.313157 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:09 crc kubenswrapper[4767]: E1124 21:40:09.313172 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:09 crc kubenswrapper[4767]: E1124 21:40:09.313388 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.335781 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.335952 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.335963 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.335979 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.335990 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:09Z","lastTransitionTime":"2025-11-24T21:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.439157 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.439258 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.439328 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.439411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.439433 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:09Z","lastTransitionTime":"2025-11-24T21:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.542129 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.542200 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.542218 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.542241 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.542259 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:09Z","lastTransitionTime":"2025-11-24T21:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.644218 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.645233 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.645296 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.645329 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.645354 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:09Z","lastTransitionTime":"2025-11-24T21:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.747516 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.747560 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.747579 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.747608 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.747630 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:09Z","lastTransitionTime":"2025-11-24T21:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.849960 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.849992 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.850003 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.850017 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.850027 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:09Z","lastTransitionTime":"2025-11-24T21:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.952901 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.952947 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.952958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.952978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:09 crc kubenswrapper[4767]: I1124 21:40:09.952990 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:09Z","lastTransitionTime":"2025-11-24T21:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.055805 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.055834 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.055844 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.055856 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.055865 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:10Z","lastTransitionTime":"2025-11-24T21:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.158757 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.158801 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.158811 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.158825 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.158834 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:10Z","lastTransitionTime":"2025-11-24T21:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.261203 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.261256 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.261293 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.261315 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.261336 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:10Z","lastTransitionTime":"2025-11-24T21:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.313495 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.313580 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:10 crc kubenswrapper[4767]: E1124 21:40:10.313643 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:10 crc kubenswrapper[4767]: E1124 21:40:10.313886 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.365934 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.366256 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.366425 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.366600 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.366759 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:10Z","lastTransitionTime":"2025-11-24T21:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.471140 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.471180 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.471189 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.471203 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.471212 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:10Z","lastTransitionTime":"2025-11-24T21:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.574115 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.574181 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.574199 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.574225 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:40:10 crc kubenswrapper[4767]: I1124 21:40:10.574243 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:10Z","lastTransitionTime":"2025-11-24T21:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[Editor's note: this five-entry cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats on every node-status sync, roughly every 103 ms, from 21:40:10.574 through 21:40:16.256, with only the timestamps changing. The repeated cycles are omitted below; the distinct pod-sandbox and pod-sync entries interleaved with them are kept.]
Nov 24 21:40:11 crc kubenswrapper[4767]: I1124 21:40:11.312769 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:40:11 crc kubenswrapper[4767]: I1124 21:40:11.312816 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:40:11 crc kubenswrapper[4767]: E1124 21:40:11.312972 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:40:11 crc kubenswrapper[4767]: E1124 21:40:11.313165 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621"
Nov 24 21:40:12 crc kubenswrapper[4767]: I1124 21:40:12.313213 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:40:12 crc kubenswrapper[4767]: I1124 21:40:12.313300 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:40:12 crc kubenswrapper[4767]: E1124 21:40:12.313443 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:40:12 crc kubenswrapper[4767]: E1124 21:40:12.313536 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:40:13 crc kubenswrapper[4767]: I1124 21:40:13.313094 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:40:13 crc kubenswrapper[4767]: I1124 21:40:13.313097 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:40:13 crc kubenswrapper[4767]: E1124 21:40:13.313410 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:40:13 crc kubenswrapper[4767]: E1124 21:40:13.313243 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621"
Nov 24 21:40:14 crc kubenswrapper[4767]: I1124 21:40:14.312808 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:40:14 crc kubenswrapper[4767]: E1124 21:40:14.313028 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:40:14 crc kubenswrapper[4767]: I1124 21:40:14.312815 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:40:14 crc kubenswrapper[4767]: E1124 21:40:14.313433 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
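[Editor's note: the "No sandbox for pod can be found" attempts land on a fixed cadence (21:40:11.312, 12.313, 13.313, 14.312, ...). Below is a minimal Python sketch that measures the per-pod retry interval from the klog timestamps; it assumes the single-day I<mmdd> hh:mm:ss.us format seen above and is illustrative only.]

```python
# Minimal sketch: per-pod retry interval for "No sandbox for pod can be
# found" entries, parsed from klog timestamps on stdin.
import re
import sys
from datetime import datetime

PAT = re.compile(
    r'I(\d{4}) ([\d:.]+).*"No sandbox for pod can be found.*?" pod="([^"]+)"'
)

last = {}
for line in sys.stdin:
    m = PAT.search(line)
    if not m:
        continue
    # klog "I1124 21:40:11.312769": month+day, then time with microseconds.
    ts = datetime.strptime(f"{m.group(1)} {m.group(2)}", "%m%d %H:%M:%S.%f")
    pod = m.group(3)
    if pod in last:
        print(f"{pod}: retried after {(ts - last[pod]).total_seconds():.3f}s")
    last[pod] = ts
```

[On these lines each pod comes out at roughly two seconds between attempts; the two pod pairs alternate, one pair per one-second sync tick.]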
Nov 24 21:40:15 crc kubenswrapper[4767]: I1124 21:40:15.312597 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:40:15 crc kubenswrapper[4767]: I1124 21:40:15.312678 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:40:15 crc kubenswrapper[4767]: E1124 21:40:15.312775 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:40:15 crc kubenswrapper[4767]: E1124 21:40:15.312913 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621"
Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.313166 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.313229 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:40:16 crc kubenswrapper[4767]: E1124 21:40:16.313420 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:16 crc kubenswrapper[4767]: E1124 21:40:16.313547 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.359339 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.359408 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.359427 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.359455 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.359473 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.375084 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.375140 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.375155 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.375175 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.375189 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:16 crc kubenswrapper[4767]: E1124 21:40:16.393130 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.398119 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.398163 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.398174 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.398192 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.398206 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:16 crc kubenswrapper[4767]: E1124 21:40:16.414440 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.418415 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.418472 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.418490 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.418515 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.418532 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:16 crc kubenswrapper[4767]: E1124 21:40:16.437809 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.442322 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.442379 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.442396 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.442420 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.442438 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:16 crc kubenswrapper[4767]: E1124 21:40:16.459807 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.463461 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.463518 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.463536 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.463565 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.463584 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:16 crc kubenswrapper[4767]: E1124 21:40:16.480807 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:40:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7cfbd01d-abd4-4a8c-9957-ee552fd378d0\\\",\\\"systemUUID\\\":\\\"575c8020-5419-4b9b-904a-464e70414810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:16 crc kubenswrapper[4767]: E1124 21:40:16.481025 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.482918 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.482987 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.483011 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.483040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.483059 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.586500 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.586573 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.586599 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.586630 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.586655 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.689261 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.689315 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.689327 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.689342 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.689350 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.792705 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.792775 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.792798 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.792828 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.792854 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.895839 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.895911 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.895932 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.895963 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.895988 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.999185 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.999248 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.999298 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.999326 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:16 crc kubenswrapper[4767]: I1124 21:40:16.999343 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:16Z","lastTransitionTime":"2025-11-24T21:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.102209 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.102357 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.102381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.102411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.102428 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:17Z","lastTransitionTime":"2025-11-24T21:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.205544 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.205678 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.205709 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.205737 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.205758 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:17Z","lastTransitionTime":"2025-11-24T21:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.308431 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.308488 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.308509 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.308543 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.308563 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:17Z","lastTransitionTime":"2025-11-24T21:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.312243 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.312345 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:17 crc kubenswrapper[4767]: E1124 21:40:17.312430 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:17 crc kubenswrapper[4767]: E1124 21:40:17.312543 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.411981 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.412044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.412056 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.412076 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.412085 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:17Z","lastTransitionTime":"2025-11-24T21:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.515885 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.515949 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.515975 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.516007 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.516027 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:17Z","lastTransitionTime":"2025-11-24T21:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.619007 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.619093 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.619130 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.619166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.619188 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:17Z","lastTransitionTime":"2025-11-24T21:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.721773 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.721818 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.721833 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.721854 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.721891 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:17Z","lastTransitionTime":"2025-11-24T21:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.824864 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.824929 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.824941 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.824960 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.824974 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:17Z","lastTransitionTime":"2025-11-24T21:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.929090 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.929162 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.929179 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.929203 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:17 crc kubenswrapper[4767]: I1124 21:40:17.929221 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:17Z","lastTransitionTime":"2025-11-24T21:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.032355 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.032431 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.032456 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.032488 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.032511 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:18Z","lastTransitionTime":"2025-11-24T21:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.134991 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.135070 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.135093 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.135123 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.135154 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:18Z","lastTransitionTime":"2025-11-24T21:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.237681 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.237844 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.237880 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.237908 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.237929 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:18Z","lastTransitionTime":"2025-11-24T21:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.312733 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:18 crc kubenswrapper[4767]: E1124 21:40:18.312963 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.313044 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:18 crc kubenswrapper[4767]: E1124 21:40:18.314847 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.332916 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b360dd484190f8286c595d7e0f9232f8c1815765ed75524b470a3fddeffbe737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72ppr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-74ffd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.341288 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.341389 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.341408 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.341450 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.341464 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:18Z","lastTransitionTime":"2025-11-24T21:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.352525 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gnz8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45850ec-6094-4a27-aa04-a35c002e6160\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:39:51Z\\\",\\\"message\\\":\\\"2025-11-24T21:39:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a\\\\n2025-11-24T21:39:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfaa0b2f-d033-4b46-bc5b-ec2cc6eedd8a to /host/opt/cni/bin/\\\\n2025-11-24T21:39:06Z [verbose] multus-daemon started\\\\n2025-11-24T21:39:06Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtxpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gnz8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.377962 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f27727-62e4-4386-a459-b26e471e1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:40:03Z\\\",\\\"message\\\":\\\"in node crc\\\\nI1124 21:40:03.381663 6848 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 21:40:03.381667 6848 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 21:40:03.381673 6848 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nI1124 21:40:03.381677 6848 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nI1124 21:40:03.381681 6848 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-74ffd in node crc\\\\nI1124 21:40:03.381686 6848 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-74ffd after 0 failed attempt(s)\\\\nI1124 21:40:03.381689 6848 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-74ffd\\\\nF1124 21:40:03.381692 6848 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:40:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ll767\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.392558 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzmh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a611583d-9542-4d80-9e88-391ee935b033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87581a5015f8bb2af9385400b745c41964ad69ef2109a4e06437b013a379b58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jl8vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzmh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.410343 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3c69a6-6755-47bf-8e68-d70004d77621\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9b9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-q9q7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.429677 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d01e78d8-05c0-42b3-bf71-f84e9dcdceca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4805cd29972e152e1fc9be947714f4f0eac609164b9b29bbd74f4f55aa898116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://445c2d57a4d4ac5019c9b71e4de29dcc96e7b05d89aae9d4279182687db6285e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e14608ad764f4eba0d9d3eb7fab8a1dd326d60015c3c31db080e1f4863e569a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f526b4bb0643eb8431dd9ec023b10a84dca62de0152efb0b56a2037adb80d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca5ba6c8ff40354ec4b03e20c04891ddb1051c5dc47167243a54e82817f7f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"W1124 21:38:42.669754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:38:42.670359 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764020322 cert, and key in /tmp/serving-cert-2320026020/serving-signer.crt, /tmp/serving-cert-2320026020/serving-signer.key\\\\nI1124 21:38:43.108517 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:38:57.892998 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 21:38:57.893314 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:38:57.894800 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2320026020/tls.crt::/tmp/serving-cert-2320026020/tls.key\\\\\\\"\\\\nI1124 21:38:58.212063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 21:38:58.214231 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 21:38:58.214254 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 21:38:58.214298 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 21:38:58.214304 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1124 21:38:58.221073 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9b2631d14f847b1f92cb92b88f461d2b61473abbfe643574a5f29be82673a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6a1444714f3e76f77de4dd1f2a2aa2a5ab524def13dea022171675d73e1cb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.445558 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.445628 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.445653 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.445685 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.445709 4767 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:18Z","lastTransitionTime":"2025-11-24T21:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.451673 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d947680936bc4324d548e1b52ca80afe2917b603e574d9e8d3d99347019eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a91609a8123e77dbbd97c22101ada4c8ce63ef6a8e65e0766693a2c40dd1ce33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.472465 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad9f7d19-6d97-44a3-8918-41ba5bc39ef3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999c76d2a67cef87a48d02540b8e3d1086304227aed6bfd8a1ed484b8a693d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14fa4a857366c1051914a7a069b1adce88cafe19a28812cb44de6afd9bae1633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwwqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:15Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8thvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.489820 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab15e7-d5c6-48e6-ba99-721f916c65ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://207cea1d9ca8d41df6953dfe85c463c241ce72f141d2d6cd8222cd6ba01ea4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://919cbb9af5395a7f937ed8ecb134229552f9b6c90ec4146c439d7f52db65750d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12c906ba4356ce017882a7cddd4422fb85b0a189007538938589a5f5c2a078e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7f2155bb17d0db29d826cb09237d6d3a2aa228263c4e856c6b43161032b84e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.507883 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.524765 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2baae3176d3b3e2d61e4cd88c1783206af65efdd8e057084d02f1205f44636a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.544657 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.548306 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.548345 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.548386 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.548404 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.548415 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:18Z","lastTransitionTime":"2025-11-24T21:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.568348 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49c8848-a5f0-4e10-b053-8048beeaad5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f84b97f685585337f58c5ab1dae38b15f19064b49f1ad0783dee729eb2a4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfceee0766e69affcc01ea37745b99fe8fe75ba81411ed0d1e0bd36fefe1f53f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c5c130b0a49ef8d66688d90e8fe35ba48cef346dbcbaccd56d640c671ac73d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36add20cef54b4e37260072eef9f72eed7960e0d17026a55f92766942c0b9b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b6bddcfb79a45c3deca7ac1372dbc74e8fb076b806d575b4b4c37680ff20bd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e74ddc67b870726bbee0220341cd652d79eef7f9f3cf2ffcdeb5683699322b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673ce04e7bba9549c9963175c924440056d4985afd21d0b4d93edfc2da2320fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m56s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwpfp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.582120 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5fca095-78a9-4669-9753-8c02bce14697\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cdc4afa8ccb49b23c09092c09ab0b1ba0f03d45f78aec7456fc95ad73accb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797addb544bfdd1434cf10c7380a65f58b3f44a2fe74ee493d4e2b727eddbba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aef520bfa426b91169396300d5eef5b83031067c7b6b1267b1133215111bf484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab32c61829fe2fc35cf839e22919a60a2684509683188bbe054a6ce327d58884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.598352 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a005e0bc0f3c7c8a4bd089fae93601588ecce882881fc75141ff2a7e32d78b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.615070 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.629097 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2p8zc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4396a62d-6ac4-4999-9bbb-e14f20a5a9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1da3e95d19c054516f0c68d6777f97ab2bd784a61199f981b4790887317ec4a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp5s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:39:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2p8zc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.642897 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48de0f8a-7fde-4bec-8374-73459b1c7d8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95ef545043d6d5f94fe8d953f6e2662eae3be156c562322770edfc3488fc0a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cc1fb7002640745b3915951c50b0c973b3a49aa8a1983bef96d4fd92268a14e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:38:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:38:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:38:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:40:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.650715 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.650769 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.650785 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.650804 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.650817 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:18Z","lastTransitionTime":"2025-11-24T21:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.753667 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.753943 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.754012 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.754076 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.754148 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:18Z","lastTransitionTime":"2025-11-24T21:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.856637 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.856666 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.856675 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.856687 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.856696 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:18Z","lastTransitionTime":"2025-11-24T21:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.958732 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.958812 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.958837 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.958867 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:18 crc kubenswrapper[4767]: I1124 21:40:18.958885 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:18Z","lastTransitionTime":"2025-11-24T21:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.061962 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.062054 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.062078 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.062108 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.062131 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:19Z","lastTransitionTime":"2025-11-24T21:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.164782 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.165136 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.165383 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.165613 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.165781 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:19Z","lastTransitionTime":"2025-11-24T21:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.268077 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.268597 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.268798 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.269027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.269300 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:19Z","lastTransitionTime":"2025-11-24T21:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.312920 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:19 crc kubenswrapper[4767]: E1124 21:40:19.313078 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.313453 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:19 crc kubenswrapper[4767]: E1124 21:40:19.313757 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.371955 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.371999 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.372010 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.372027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.372038 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:19Z","lastTransitionTime":"2025-11-24T21:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.474165 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.474231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.474249 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.474309 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.474328 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:19Z","lastTransitionTime":"2025-11-24T21:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.577838 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.577994 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.578014 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.578039 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.578061 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:19Z","lastTransitionTime":"2025-11-24T21:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.680635 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.680692 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.680709 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.680731 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.680749 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:19Z","lastTransitionTime":"2025-11-24T21:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.783681 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.783757 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.783776 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.783801 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.783819 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:19Z","lastTransitionTime":"2025-11-24T21:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.886193 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.886265 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.886297 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.886317 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.886350 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:19Z","lastTransitionTime":"2025-11-24T21:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.989952 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.990012 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.990030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.990060 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:19 crc kubenswrapper[4767]: I1124 21:40:19.990078 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:19Z","lastTransitionTime":"2025-11-24T21:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.092814 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.093128 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.093148 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.093174 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.093192 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:20Z","lastTransitionTime":"2025-11-24T21:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.195872 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.195947 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.195964 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.196073 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.196107 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:20Z","lastTransitionTime":"2025-11-24T21:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.299959 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.300048 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.300071 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.300103 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.300125 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:20Z","lastTransitionTime":"2025-11-24T21:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.312772 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.312810 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:20 crc kubenswrapper[4767]: E1124 21:40:20.312949 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:20 crc kubenswrapper[4767]: E1124 21:40:20.313443 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.314176 4767 scope.go:117] "RemoveContainer" containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" Nov 24 21:40:20 crc kubenswrapper[4767]: E1124 21:40:20.314423 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.409642 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.409694 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.411137 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.411179 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.411195 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:20Z","lastTransitionTime":"2025-11-24T21:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.514483 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.514642 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.514667 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.514692 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.514710 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:20Z","lastTransitionTime":"2025-11-24T21:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.619073 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.619148 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.619171 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.619200 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.619221 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:20Z","lastTransitionTime":"2025-11-24T21:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.722493 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.722532 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.722539 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.722553 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.722562 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:20Z","lastTransitionTime":"2025-11-24T21:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.825137 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.825186 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.825204 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.825227 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.825242 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:20Z","lastTransitionTime":"2025-11-24T21:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.928620 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.928663 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.928677 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.928723 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:20 crc kubenswrapper[4767]: I1124 21:40:20.928737 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:20Z","lastTransitionTime":"2025-11-24T21:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.032944 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.033025 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.033036 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.033062 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.033078 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:21Z","lastTransitionTime":"2025-11-24T21:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.137301 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.137394 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.137409 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.137441 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.137458 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:21Z","lastTransitionTime":"2025-11-24T21:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.240893 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.240955 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.240973 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.241001 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.241032 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:21Z","lastTransitionTime":"2025-11-24T21:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.312676 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.312807 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:21 crc kubenswrapper[4767]: E1124 21:40:21.312996 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:21 crc kubenswrapper[4767]: E1124 21:40:21.313578 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.343774 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.343830 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.343852 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.343881 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.343907 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:21Z","lastTransitionTime":"2025-11-24T21:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.446563 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.446639 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.446661 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.446693 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.446714 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:21Z","lastTransitionTime":"2025-11-24T21:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.508488 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:21 crc kubenswrapper[4767]: E1124 21:40:21.508627 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:40:21 crc kubenswrapper[4767]: E1124 21:40:21.508685 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs podName:3b3c69a6-6755-47bf-8e68-d70004d77621 nodeName:}" failed. No retries permitted until 2025-11-24 21:41:25.508669339 +0000 UTC m=+168.425652721 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs") pod "network-metrics-daemon-q9q7p" (UID: "3b3c69a6-6755-47bf-8e68-d70004d77621") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.549397 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.549460 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.549478 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.549502 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.549521 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:21Z","lastTransitionTime":"2025-11-24T21:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.652645 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.652716 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.652735 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.652763 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.652845 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:21Z","lastTransitionTime":"2025-11-24T21:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.756403 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.756476 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.756493 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.756518 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.756536 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:21Z","lastTransitionTime":"2025-11-24T21:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.859727 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.859873 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.859899 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.859927 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.859949 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:21Z","lastTransitionTime":"2025-11-24T21:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.962632 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.962682 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.962693 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.962709 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:21 crc kubenswrapper[4767]: I1124 21:40:21.962721 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:21Z","lastTransitionTime":"2025-11-24T21:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 21:40:22 crc kubenswrapper[4767]: I1124 21:40:22.065381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:40:22 crc kubenswrapper[4767]: I1124 21:40:22.065411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:40:22 crc kubenswrapper[4767]: I1124 21:40:22.065419 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:40:22 crc kubenswrapper[4767]: I1124 21:40:22.065431 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:40:22 crc kubenswrapper[4767]: I1124 21:40:22.065439 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:22Z","lastTransitionTime":"2025-11-24T21:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:40:22 crc kubenswrapper[4767]: I1124 21:40:22.313144 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:40:22 crc kubenswrapper[4767]: I1124 21:40:22.313189 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:40:22 crc kubenswrapper[4767]: E1124 21:40:22.313453 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:40:22 crc kubenswrapper[4767]: E1124 21:40:22.313634 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:40:23 crc kubenswrapper[4767]: I1124 21:40:23.096461 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:40:23 crc kubenswrapper[4767]: I1124 21:40:23.096511 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:40:23 crc kubenswrapper[4767]: I1124 21:40:23.096527 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:40:23 crc kubenswrapper[4767]: I1124 21:40:23.096549 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:40:23 crc kubenswrapper[4767]: I1124 21:40:23.096565 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:23Z","lastTransitionTime":"2025-11-24T21:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:40:23 crc kubenswrapper[4767]: I1124 21:40:23.313194 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:40:23 crc kubenswrapper[4767]: I1124 21:40:23.313212 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:40:23 crc kubenswrapper[4767]: E1124 21:40:23.313402 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621"
Nov 24 21:40:23 crc kubenswrapper[4767]: E1124 21:40:23.313581 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.025868 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.025972 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.025994 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.026052 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.026070 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:24Z","lastTransitionTime":"2025-11-24T21:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.313142 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:40:24 crc kubenswrapper[4767]: E1124 21:40:24.313318 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.313687 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:24 crc kubenswrapper[4767]: E1124 21:40:24.313910 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.333730 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.333775 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.333792 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.333813 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.333833 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:24Z","lastTransitionTime":"2025-11-24T21:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.437331 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.437388 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.437427 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.437462 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:24 crc kubenswrapper[4767]: I1124 21:40:24.437490 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:24Z","lastTransitionTime":"2025-11-24T21:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.056185 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.057203 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.057256 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.057314 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.057334 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:25Z","lastTransitionTime":"2025-11-24T21:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.312722 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.312778 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p"
Nov 24 21:40:25 crc kubenswrapper[4767]: E1124 21:40:25.312911 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:25 crc kubenswrapper[4767]: E1124 21:40:25.313165 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.364693 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.364729 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.364739 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.364755 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.364766 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:25Z","lastTransitionTime":"2025-11-24T21:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.467954 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.468024 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.468045 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.468069 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:25 crc kubenswrapper[4767]: I1124 21:40:25.468087 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:25Z","lastTransitionTime":"2025-11-24T21:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.089841 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.089893 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.089906 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.089927 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.089952 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:26Z","lastTransitionTime":"2025-11-24T21:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.312746 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:40:26 crc kubenswrapper[4767]: E1124 21:40:26.313156 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.316988 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:26 crc kubenswrapper[4767]: E1124 21:40:26.317171 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.329909 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.399047 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.399102 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.399121 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.399145 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.399165 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:26Z","lastTransitionTime":"2025-11-24T21:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.502458 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.502510 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.502526 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.502547 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.502565 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:26Z","lastTransitionTime":"2025-11-24T21:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.605563 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.605641 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.605657 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.606232 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.606554 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:26Z","lastTransitionTime":"2025-11-24T21:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.710077 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.710167 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.710205 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.710243 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.710408 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:26Z","lastTransitionTime":"2025-11-24T21:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.799401 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.799448 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.799459 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.799476 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.799488 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:26Z","lastTransitionTime":"2025-11-24T21:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.834027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.834099 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.834111 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.834127 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.834136 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:40:26Z","lastTransitionTime":"2025-11-24T21:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.871716 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58"] Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.872368 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.876298 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.876515 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.877092 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.877653 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.892979 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wzmh2" podStartSLOduration=84.892957048 podStartE2EDuration="1m24.892957048s" podCreationTimestamp="2025-11-24 21:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:26.892405312 +0000 UTC m=+109.809388764" watchObservedRunningTime="2025-11-24 21:40:26.892957048 +0000 UTC m=+109.809940430" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.896815 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/470aa753-32db-49bc-8aab-cc0c1e9648fa-service-ca\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.896915 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/470aa753-32db-49bc-8aab-cc0c1e9648fa-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.896962 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/470aa753-32db-49bc-8aab-cc0c1e9648fa-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.896994 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/470aa753-32db-49bc-8aab-cc0c1e9648fa-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.897039 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/470aa753-32db-49bc-8aab-cc0c1e9648fa-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.943938 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.943916396 podStartE2EDuration="1m28.943916396s" podCreationTimestamp="2025-11-24 21:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:26.928793507 +0000 UTC m=+109.845776889" watchObservedRunningTime="2025-11-24 21:40:26.943916396 +0000 UTC m=+109.860899778" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.958880 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podStartSLOduration=83.958856919 podStartE2EDuration="1m23.958856919s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:26.944671348 +0000 UTC m=+109.861654720" watchObservedRunningTime="2025-11-24 21:40:26.958856919 +0000 UTC m=+109.875840311" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.959352 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gnz8t" podStartSLOduration=83.959343353 podStartE2EDuration="1m23.959343353s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:26.958628562 +0000 UTC m=+109.875611934" watchObservedRunningTime="2025-11-24 21:40:26.959343353 +0000 UTC m=+109.876326745" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.997964 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/470aa753-32db-49bc-8aab-cc0c1e9648fa-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.998013 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/470aa753-32db-49bc-8aab-cc0c1e9648fa-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.998028 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/470aa753-32db-49bc-8aab-cc0c1e9648fa-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.998054 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/470aa753-32db-49bc-8aab-cc0c1e9648fa-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.998085 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/470aa753-32db-49bc-8aab-cc0c1e9648fa-service-ca\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.998147 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/470aa753-32db-49bc-8aab-cc0c1e9648fa-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.998176 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/470aa753-32db-49bc-8aab-cc0c1e9648fa-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:26 crc kubenswrapper[4767]: I1124 21:40:26.998864 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/470aa753-32db-49bc-8aab-cc0c1e9648fa-service-ca\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.011164 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/470aa753-32db-49bc-8aab-cc0c1e9648fa-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: 
\"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.022969 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/470aa753-32db-49bc-8aab-cc0c1e9648fa-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-knm58\" (UID: \"470aa753-32db-49bc-8aab-cc0c1e9648fa\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.027918 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.027904931 podStartE2EDuration="1m29.027904931s" podCreationTimestamp="2025-11-24 21:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:27.02716042 +0000 UTC m=+109.944143782" watchObservedRunningTime="2025-11-24 21:40:27.027904931 +0000 UTC m=+109.944888303" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.058493 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8thvg" podStartSLOduration=84.058471078 podStartE2EDuration="1m24.058471078s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:27.058235881 +0000 UTC m=+109.975219273" watchObservedRunningTime="2025-11-24 21:40:27.058471078 +0000 UTC m=+109.975454460" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.110039 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.110017683 podStartE2EDuration="1.110017683s" podCreationTimestamp="2025-11-24 21:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:27.109865468 +0000 UTC m=+110.026848840" watchObservedRunningTime="2025-11-24 21:40:27.110017683 +0000 UTC m=+110.027001065" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.110617 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mwpfp" podStartSLOduration=84.11060761 podStartE2EDuration="1m24.11060761s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:27.08749918 +0000 UTC m=+110.004482572" watchObservedRunningTime="2025-11-24 21:40:27.11060761 +0000 UTC m=+110.027591002" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.157069 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2p8zc" podStartSLOduration=85.157049827 podStartE2EDuration="1m25.157049827s" podCreationTimestamp="2025-11-24 21:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:27.156968154 +0000 UTC m=+110.073951526" watchObservedRunningTime="2025-11-24 21:40:27.157049827 +0000 UTC m=+110.074033189" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.177708 4767 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=35.177691325 podStartE2EDuration="35.177691325s" podCreationTimestamp="2025-11-24 21:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:27.167187821 +0000 UTC m=+110.084171193" watchObservedRunningTime="2025-11-24 21:40:27.177691325 +0000 UTC m=+110.094674697" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.190015 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.189998892 podStartE2EDuration="57.189998892s" podCreationTimestamp="2025-11-24 21:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:27.17854697 +0000 UTC m=+110.095530342" watchObservedRunningTime="2025-11-24 21:40:27.189998892 +0000 UTC m=+110.106982264" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.198225 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.312846 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:27 crc kubenswrapper[4767]: E1124 21:40:27.313232 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.313254 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:27 crc kubenswrapper[4767]: E1124 21:40:27.313762 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.919422 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" event={"ID":"470aa753-32db-49bc-8aab-cc0c1e9648fa","Type":"ContainerStarted","Data":"d6b2a5e2bb9f3ede225680b3e6ee48c21270ceeb49977ff4e8e6c613052772d8"} Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.919803 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" event={"ID":"470aa753-32db-49bc-8aab-cc0c1e9648fa","Type":"ContainerStarted","Data":"83d1f9a026cc842551600445259e7a65a6b0a70652cc14dac1f4ca2ec6ad9d52"} Nov 24 21:40:27 crc kubenswrapper[4767]: I1124 21:40:27.942320 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-knm58" podStartSLOduration=85.942262779 podStartE2EDuration="1m25.942262779s" podCreationTimestamp="2025-11-24 21:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:27.941089165 +0000 UTC m=+110.858072627" watchObservedRunningTime="2025-11-24 21:40:27.942262779 +0000 UTC m=+110.859246191" Nov 24 21:40:28 crc kubenswrapper[4767]: I1124 21:40:28.312590 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:28 crc kubenswrapper[4767]: E1124 21:40:28.316695 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:28 crc kubenswrapper[4767]: I1124 21:40:28.316779 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:28 crc kubenswrapper[4767]: E1124 21:40:28.317337 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:29 crc kubenswrapper[4767]: I1124 21:40:29.312536 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:29 crc kubenswrapper[4767]: I1124 21:40:29.312683 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:29 crc kubenswrapper[4767]: E1124 21:40:29.312722 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:29 crc kubenswrapper[4767]: E1124 21:40:29.312926 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:30 crc kubenswrapper[4767]: I1124 21:40:30.312549 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:30 crc kubenswrapper[4767]: I1124 21:40:30.312571 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:30 crc kubenswrapper[4767]: E1124 21:40:30.313016 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:30 crc kubenswrapper[4767]: E1124 21:40:30.313109 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:31 crc kubenswrapper[4767]: I1124 21:40:31.313296 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:31 crc kubenswrapper[4767]: I1124 21:40:31.313400 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:31 crc kubenswrapper[4767]: E1124 21:40:31.313423 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:31 crc kubenswrapper[4767]: E1124 21:40:31.313593 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:32 crc kubenswrapper[4767]: I1124 21:40:32.312768 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:32 crc kubenswrapper[4767]: E1124 21:40:32.312863 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:32 crc kubenswrapper[4767]: I1124 21:40:32.312996 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:32 crc kubenswrapper[4767]: E1124 21:40:32.313180 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:33 crc kubenswrapper[4767]: I1124 21:40:33.313234 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:33 crc kubenswrapper[4767]: E1124 21:40:33.313441 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:33 crc kubenswrapper[4767]: I1124 21:40:33.313235 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:33 crc kubenswrapper[4767]: E1124 21:40:33.313808 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:34 crc kubenswrapper[4767]: I1124 21:40:34.312711 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:34 crc kubenswrapper[4767]: E1124 21:40:34.312943 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:34 crc kubenswrapper[4767]: I1124 21:40:34.313032 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:34 crc kubenswrapper[4767]: E1124 21:40:34.313227 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:34 crc kubenswrapper[4767]: I1124 21:40:34.314076 4767 scope.go:117] "RemoveContainer" containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" Nov 24 21:40:34 crc kubenswrapper[4767]: E1124 21:40:34.314259 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ll767_openshift-ovn-kubernetes(41f27727-62e4-4386-a459-b26e471e1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" Nov 24 21:40:35 crc kubenswrapper[4767]: I1124 21:40:35.312711 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:35 crc kubenswrapper[4767]: I1124 21:40:35.312744 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:35 crc kubenswrapper[4767]: E1124 21:40:35.312858 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:35 crc kubenswrapper[4767]: E1124 21:40:35.312994 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:36 crc kubenswrapper[4767]: I1124 21:40:36.313005 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:36 crc kubenswrapper[4767]: I1124 21:40:36.313026 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:36 crc kubenswrapper[4767]: E1124 21:40:36.313164 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:36 crc kubenswrapper[4767]: E1124 21:40:36.313370 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:37 crc kubenswrapper[4767]: I1124 21:40:37.312702 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:37 crc kubenswrapper[4767]: I1124 21:40:37.312708 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:37 crc kubenswrapper[4767]: E1124 21:40:37.312959 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:37 crc kubenswrapper[4767]: E1124 21:40:37.313193 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:37 crc kubenswrapper[4767]: I1124 21:40:37.956093 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnz8t_f45850ec-6094-4a27-aa04-a35c002e6160/kube-multus/1.log" Nov 24 21:40:37 crc kubenswrapper[4767]: I1124 21:40:37.956849 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnz8t_f45850ec-6094-4a27-aa04-a35c002e6160/kube-multus/0.log" Nov 24 21:40:37 crc kubenswrapper[4767]: I1124 21:40:37.956913 4767 generic.go:334] "Generic (PLEG): container finished" podID="f45850ec-6094-4a27-aa04-a35c002e6160" containerID="702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2" exitCode=1 Nov 24 21:40:37 crc kubenswrapper[4767]: I1124 21:40:37.956964 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnz8t" event={"ID":"f45850ec-6094-4a27-aa04-a35c002e6160","Type":"ContainerDied","Data":"702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2"} Nov 24 21:40:37 crc kubenswrapper[4767]: I1124 21:40:37.957015 4767 scope.go:117] "RemoveContainer" containerID="8657ab7388c8da0f2c0dca24a9f55c4d9aea4adcd7c4b13921d6caa862e48049" Nov 24 21:40:37 crc kubenswrapper[4767]: I1124 21:40:37.957630 4767 scope.go:117] "RemoveContainer" containerID="702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2" Nov 24 21:40:37 crc kubenswrapper[4767]: E1124 21:40:37.957873 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gnz8t_openshift-multus(f45850ec-6094-4a27-aa04-a35c002e6160)\"" pod="openshift-multus/multus-gnz8t" podUID="f45850ec-6094-4a27-aa04-a35c002e6160" Nov 24 21:40:38 crc kubenswrapper[4767]: E1124 21:40:38.295034 4767 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 21:40:38 crc kubenswrapper[4767]: I1124 21:40:38.312979 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:38 crc kubenswrapper[4767]: I1124 21:40:38.312996 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:38 crc kubenswrapper[4767]: E1124 21:40:38.314115 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:38 crc kubenswrapper[4767]: E1124 21:40:38.314360 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:38 crc kubenswrapper[4767]: E1124 21:40:38.492469 4767 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:40:38 crc kubenswrapper[4767]: I1124 21:40:38.963439 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnz8t_f45850ec-6094-4a27-aa04-a35c002e6160/kube-multus/1.log" Nov 24 21:40:39 crc kubenswrapper[4767]: I1124 21:40:39.312932 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:39 crc kubenswrapper[4767]: I1124 21:40:39.312962 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:39 crc kubenswrapper[4767]: E1124 21:40:39.313132 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:39 crc kubenswrapper[4767]: E1124 21:40:39.313242 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:40 crc kubenswrapper[4767]: I1124 21:40:40.312984 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:40 crc kubenswrapper[4767]: I1124 21:40:40.313075 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:40 crc kubenswrapper[4767]: E1124 21:40:40.313264 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:40 crc kubenswrapper[4767]: E1124 21:40:40.313455 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:41 crc kubenswrapper[4767]: I1124 21:40:41.312647 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:41 crc kubenswrapper[4767]: I1124 21:40:41.312715 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:41 crc kubenswrapper[4767]: E1124 21:40:41.312847 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:41 crc kubenswrapper[4767]: E1124 21:40:41.312958 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:42 crc kubenswrapper[4767]: I1124 21:40:42.312667 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:42 crc kubenswrapper[4767]: I1124 21:40:42.312736 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:42 crc kubenswrapper[4767]: E1124 21:40:42.312831 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:42 crc kubenswrapper[4767]: E1124 21:40:42.312949 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:43 crc kubenswrapper[4767]: I1124 21:40:43.312381 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:43 crc kubenswrapper[4767]: E1124 21:40:43.312593 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:43 crc kubenswrapper[4767]: I1124 21:40:43.312411 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:43 crc kubenswrapper[4767]: E1124 21:40:43.312727 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:43 crc kubenswrapper[4767]: E1124 21:40:43.494436 4767 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:40:44 crc kubenswrapper[4767]: I1124 21:40:44.312765 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:44 crc kubenswrapper[4767]: E1124 21:40:44.312949 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:44 crc kubenswrapper[4767]: I1124 21:40:44.313049 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:44 crc kubenswrapper[4767]: E1124 21:40:44.313262 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:45 crc kubenswrapper[4767]: I1124 21:40:45.312962 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:45 crc kubenswrapper[4767]: I1124 21:40:45.313089 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:45 crc kubenswrapper[4767]: E1124 21:40:45.313181 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:45 crc kubenswrapper[4767]: E1124 21:40:45.313327 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:45 crc kubenswrapper[4767]: I1124 21:40:45.314557 4767 scope.go:117] "RemoveContainer" containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" Nov 24 21:40:45 crc kubenswrapper[4767]: I1124 21:40:45.990499 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/3.log" Nov 24 21:40:45 crc kubenswrapper[4767]: I1124 21:40:45.994550 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerStarted","Data":"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380"} Nov 24 21:40:45 crc kubenswrapper[4767]: I1124 21:40:45.995176 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:40:46 crc kubenswrapper[4767]: I1124 21:40:46.022220 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podStartSLOduration=103.022202777 podStartE2EDuration="1m43.022202777s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:46.021698433 +0000 UTC m=+128.938681855" watchObservedRunningTime="2025-11-24 21:40:46.022202777 +0000 UTC m=+128.939186149" Nov 24 21:40:46 crc kubenswrapper[4767]: I1124 21:40:46.313134 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:46 crc kubenswrapper[4767]: I1124 21:40:46.313154 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:46 crc kubenswrapper[4767]: E1124 21:40:46.313292 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:46 crc kubenswrapper[4767]: E1124 21:40:46.313543 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:46 crc kubenswrapper[4767]: I1124 21:40:46.331889 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q9q7p"] Nov 24 21:40:46 crc kubenswrapper[4767]: I1124 21:40:46.332013 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:46 crc kubenswrapper[4767]: E1124 21:40:46.332107 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:47 crc kubenswrapper[4767]: I1124 21:40:47.313307 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:47 crc kubenswrapper[4767]: E1124 21:40:47.313905 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:48 crc kubenswrapper[4767]: I1124 21:40:48.313200 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:48 crc kubenswrapper[4767]: I1124 21:40:48.313298 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:48 crc kubenswrapper[4767]: E1124 21:40:48.315326 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:48 crc kubenswrapper[4767]: I1124 21:40:48.315361 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:48 crc kubenswrapper[4767]: E1124 21:40:48.315467 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:48 crc kubenswrapper[4767]: E1124 21:40:48.315576 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:48 crc kubenswrapper[4767]: E1124 21:40:48.495742 4767 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Nov 24 21:40:49 crc kubenswrapper[4767]: I1124 21:40:49.312951 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:49 crc kubenswrapper[4767]: E1124 21:40:49.313099 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:50 crc kubenswrapper[4767]: I1124 21:40:50.312924 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:50 crc kubenswrapper[4767]: I1124 21:40:50.313056 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:50 crc kubenswrapper[4767]: I1124 21:40:50.313433 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:50 crc kubenswrapper[4767]: E1124 21:40:50.313604 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:50 crc kubenswrapper[4767]: E1124 21:40:50.313738 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:50 crc kubenswrapper[4767]: E1124 21:40:50.313858 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:51 crc kubenswrapper[4767]: I1124 21:40:51.313041 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:51 crc kubenswrapper[4767]: E1124 21:40:51.313497 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:52 crc kubenswrapper[4767]: I1124 21:40:52.312331 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:52 crc kubenswrapper[4767]: I1124 21:40:52.312376 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:52 crc kubenswrapper[4767]: I1124 21:40:52.312399 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:52 crc kubenswrapper[4767]: E1124 21:40:52.312526 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:52 crc kubenswrapper[4767]: E1124 21:40:52.312962 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:52 crc kubenswrapper[4767]: I1124 21:40:52.313065 4767 scope.go:117] "RemoveContainer" containerID="702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2" Nov 24 21:40:52 crc kubenswrapper[4767]: E1124 21:40:52.313131 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:53 crc kubenswrapper[4767]: I1124 21:40:53.030942 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnz8t_f45850ec-6094-4a27-aa04-a35c002e6160/kube-multus/1.log" Nov 24 21:40:53 crc kubenswrapper[4767]: I1124 21:40:53.031042 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnz8t" event={"ID":"f45850ec-6094-4a27-aa04-a35c002e6160","Type":"ContainerStarted","Data":"c11a97772c03bf0d654128f5785bea0e4460acc7aefb2bed6c6a691b0be41a53"} Nov 24 21:40:53 crc kubenswrapper[4767]: I1124 21:40:53.312614 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:53 crc kubenswrapper[4767]: E1124 21:40:53.313018 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:53 crc kubenswrapper[4767]: E1124 21:40:53.497447 4767 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:40:54 crc kubenswrapper[4767]: I1124 21:40:54.312603 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:54 crc kubenswrapper[4767]: I1124 21:40:54.312692 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:54 crc kubenswrapper[4767]: E1124 21:40:54.313092 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:54 crc kubenswrapper[4767]: I1124 21:40:54.312735 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:54 crc kubenswrapper[4767]: E1124 21:40:54.313229 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:54 crc kubenswrapper[4767]: E1124 21:40:54.313422 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:55 crc kubenswrapper[4767]: I1124 21:40:55.313633 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:55 crc kubenswrapper[4767]: E1124 21:40:55.313956 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:56 crc kubenswrapper[4767]: I1124 21:40:56.312942 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:56 crc kubenswrapper[4767]: I1124 21:40:56.312944 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:56 crc kubenswrapper[4767]: E1124 21:40:56.313075 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:56 crc kubenswrapper[4767]: I1124 21:40:56.313150 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:56 crc kubenswrapper[4767]: E1124 21:40:56.313192 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:56 crc kubenswrapper[4767]: E1124 21:40:56.313377 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:57 crc kubenswrapper[4767]: I1124 21:40:57.312626 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:57 crc kubenswrapper[4767]: E1124 21:40:57.312778 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:40:58 crc kubenswrapper[4767]: I1124 21:40:58.313361 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:40:58 crc kubenswrapper[4767]: I1124 21:40:58.313389 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:40:58 crc kubenswrapper[4767]: I1124 21:40:58.313402 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:40:58 crc kubenswrapper[4767]: E1124 21:40:58.316189 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q9q7p" podUID="3b3c69a6-6755-47bf-8e68-d70004d77621" Nov 24 21:40:58 crc kubenswrapper[4767]: E1124 21:40:58.316250 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:40:58 crc kubenswrapper[4767]: E1124 21:40:58.316329 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:40:59 crc kubenswrapper[4767]: I1124 21:40:59.312990 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:40:59 crc kubenswrapper[4767]: I1124 21:40:59.315739 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 21:40:59 crc kubenswrapper[4767]: I1124 21:40:59.315844 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 21:41:00 crc kubenswrapper[4767]: I1124 21:41:00.312838 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:41:00 crc kubenswrapper[4767]: I1124 21:41:00.312918 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:41:00 crc kubenswrapper[4767]: I1124 21:41:00.313211 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:41:00 crc kubenswrapper[4767]: I1124 21:41:00.316825 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 21:41:00 crc kubenswrapper[4767]: I1124 21:41:00.317103 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 21:41:00 crc kubenswrapper[4767]: I1124 21:41:00.317263 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 21:41:00 crc kubenswrapper[4767]: I1124 21:41:00.321699 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 21:41:01 crc kubenswrapper[4767]: I1124 21:41:01.090894 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:41:05 crc kubenswrapper[4767]: I1124 21:41:05.481649 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:41:05 crc kubenswrapper[4767]: I1124 21:41:05.481752 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.140089 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:06 crc kubenswrapper[4767]: E1124 21:41:06.140463 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:43:08.140405813 +0000 UTC m=+271.057389225 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.241801 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.241912 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.241958 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.242006 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.243458 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.251914 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.251917 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.252500 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.336064 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.366737 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:41:06 crc kubenswrapper[4767]: I1124 21:41:06.534374 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:41:06 crc kubenswrapper[4767]: W1124 21:41:06.620487 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-74e3271f081c425b446a45052945176c5101351dbbecd1b6027b24682964e344 WatchSource:0}: Error finding container 74e3271f081c425b446a45052945176c5101351dbbecd1b6027b24682964e344: Status 404 returned error can't find the container with id 74e3271f081c425b446a45052945176c5101351dbbecd1b6027b24682964e344 Nov 24 21:41:06 crc kubenswrapper[4767]: W1124 21:41:06.638874 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-44ff8e38e20cbcd139edac42e42461af202f7eb051624767db6383a1bb35553d WatchSource:0}: Error finding container 44ff8e38e20cbcd139edac42e42461af202f7eb051624767db6383a1bb35553d: Status 404 returned error can't find the container with id 44ff8e38e20cbcd139edac42e42461af202f7eb051624767db6383a1bb35553d Nov 24 21:41:06 crc kubenswrapper[4767]: W1124 21:41:06.724238 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-3119247c1117b6a21715d2444cef7a29af18b6ffe0574631c413b219b3bd071a WatchSource:0}: Error finding container 3119247c1117b6a21715d2444cef7a29af18b6ffe0574631c413b219b3bd071a: Status 404 returned error can't find the container with id 3119247c1117b6a21715d2444cef7a29af18b6ffe0574631c413b219b3bd071a Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.088317 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bca2ba4ad1c5e2620f171ec588b0b7d008e863199774ceeec48798a7ed8440ab"} Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.088428 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"44ff8e38e20cbcd139edac42e42461af202f7eb051624767db6383a1bb35553d"} Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.092399 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f2255a4538e92917b51f732c6cf6a924f851aafba8994071c24b3f6a87a8c627"} Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.092478 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"74e3271f081c425b446a45052945176c5101351dbbecd1b6027b24682964e344"} Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.093154 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.096491 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9d52644ce6d1ae8300565ca819962c12244d811e38599096fd3361d70c98cc64"} Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.096545 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3119247c1117b6a21715d2444cef7a29af18b6ffe0574631c413b219b3bd071a"} Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.525241 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.614495 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vdb2k"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.615551 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.620628 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.621163 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.623423 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.624363 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.625880 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.626143 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.627093 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.627316 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.627483 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.627641 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.628143 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.630003 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.631163 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.632454 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.633186 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4d8cc"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.633362 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.633592 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-4d8cc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.639372 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.640487 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.640627 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.640834 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.640943 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.642807 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.642897 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.642954 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.643055 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.643169 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.643645 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.644731 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.644911 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.645063 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.645337 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.645529 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.646038 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.646308 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-mp4ng"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.646901 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x76bn"] Nov 24 21:41:07 crc kubenswrapper[4767]: 
I1124 21:41:07.647441 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647507 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647585 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647515 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647571 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647624 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.648139 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647638 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647701 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.648304 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647713 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.648367 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.648429 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647752 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647764 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647778 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.647950 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.648986 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gmxsv"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.649056 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.649176 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.649960 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nlpg6"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.650188 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.650472 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.650526 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.651011 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.652317 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f7dpz"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.653086 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.653442 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wrbrz"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.653953 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.655849 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.656050 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.668587 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h9pjp"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.669055 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.672389 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.672518 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.672671 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.672761 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.674348 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.674878 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mx2n5"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.674920 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.675415 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.690512 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.690939 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.691061 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.691967 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.692352 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.692873 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.693491 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.693759 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.695285 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.695903 4767 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.696379 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.697729 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.698871 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.716057 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.716828 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.716936 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.717010 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.717192 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.717197 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.717365 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.717454 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.717464 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.717526 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.717568 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.717718 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.717781 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.717931 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718177 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 
21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718205 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718237 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718180 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718401 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718482 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718645 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718663 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718689 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718766 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718792 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718864 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718978 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.718996 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.719092 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.719194 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.719317 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.719419 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.719693 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.719746 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 21:41:07 crc 
kubenswrapper[4767]: I1124 21:41:07.721807 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.722245 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ck7c4"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.724744 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.726828 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-gc994"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.727133 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.727732 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.727753 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.727789 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.727937 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.728062 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-gc994" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.729797 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.729978 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.731369 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.732756 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.733237 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.738528 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.738629 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.738732 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.738990 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.743801 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.744127 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.744480 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.744793 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.745009 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.745488 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.746178 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.747203 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.748672 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.749100 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n8vvb"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.789206 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.790619 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.791048 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.792445 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.792448 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.792826 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793108 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793161 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c3520aa-b012-4e35-8336-6655ef28eae8-encryption-config\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793182 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58a04f0-dcce-4a15-9248-06fe40d8fceb-serving-cert\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793199 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/571ff756-8e0a-4959-9dcb-b2c9aff1e7c0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-f6j64\" (UID: \"571ff756-8e0a-4959-9dcb-b2c9aff1e7c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64" Nov 24 
21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793232 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4651fee-37da-4038-895d-4b483d41240e-config\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793249 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793294 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18770b7d-cd23-4e8b-89e5-67986cfbad15-service-ca-bundle\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793311 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d541163-dd1b-4486-9939-4eaa9ec350bf-auth-proxy-config\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793325 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c3520aa-b012-4e35-8336-6655ef28eae8-node-pullsecrets\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793339 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c3520aa-b012-4e35-8336-6655ef28eae8-audit-dir\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793371 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbqlp\" (UniqueName: \"kubernetes.io/projected/571ff756-8e0a-4959-9dcb-b2c9aff1e7c0-kube-api-access-lbqlp\") pod \"cluster-samples-operator-665b6dd947-f6j64\" (UID: \"571ff756-8e0a-4959-9dcb-b2c9aff1e7c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793386 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slj22\" (UniqueName: \"kubernetes.io/projected/18770b7d-cd23-4e8b-89e5-67986cfbad15-kube-api-access-slj22\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 
crc kubenswrapper[4767]: I1124 21:41:07.793406 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2kr\" (UniqueName: \"kubernetes.io/projected/cba6b5e8-eb8a-40fb-b684-c7f08ef491c5-kube-api-access-kc2kr\") pod \"openshift-controller-manager-operator-756b6f6bc6-49ppc\" (UID: \"cba6b5e8-eb8a-40fb-b684-c7f08ef491c5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793422 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-audit-policies\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793454 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793469 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793482 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793492 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cd491358-4379-40eb-a9b1-285abcbeb89c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793537 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7aaeee-4486-42e5-be43-cdc4d23aa445-config\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793553 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4651fee-37da-4038-895d-4b483d41240e-serving-cert\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793566 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18770b7d-cd23-4e8b-89e5-67986cfbad15-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793582 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0d541163-dd1b-4486-9939-4eaa9ec350bf-machine-approver-tls\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793614 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cd491358-4379-40eb-a9b1-285abcbeb89c-etcd-client\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793628 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dwqw\" (UniqueName: \"kubernetes.io/projected/1e7aaeee-4486-42e5-be43-cdc4d23aa445-kube-api-access-9dwqw\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793644 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hzkz\" (UniqueName: \"kubernetes.io/projected/c4651fee-37da-4038-895d-4b483d41240e-kube-api-access-8hzkz\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793661 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793772 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5e859ee-0d8c-48c7-8251-25c79a040f99-serving-cert\") pod \"openshift-config-operator-7777fb866f-nf9x2\" (UID: \"d5e859ee-0d8c-48c7-8251-25c79a040f99\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793799 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4651fee-37da-4038-895d-4b483d41240e-etcd-service-ca\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793818 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4651fee-37da-4038-895d-4b483d41240e-etcd-client\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793828 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793844 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a867a71a-121a-4f12-8c81-7b14f0a4fd16-proxy-tls\") pod \"machine-config-controller-84d6567774-5xcvh\" (UID: \"a867a71a-121a-4f12-8c81-7b14f0a4fd16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793886 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn9h8\" (UniqueName: \"kubernetes.io/projected/0d541163-dd1b-4486-9939-4eaa9ec350bf-kube-api-access-mn9h8\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793904 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp945\" (UniqueName: \"kubernetes.io/projected/86bad83e-cde9-43a8-803a-fda0e14ef559-kube-api-access-hp945\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793938 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sz4pp\" (UniqueName: \"kubernetes.io/projected/6b463b5d-b072-4032-aa46-9abe955f901b-kube-api-access-sz4pp\") pod \"downloads-7954f5f757-4d8cc\" (UID: \"6b463b5d-b072-4032-aa46-9abe955f901b\") " pod="openshift-console/downloads-7954f5f757-4d8cc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.793947 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794004 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18770b7d-cd23-4e8b-89e5-67986cfbad15-serving-cert\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794081 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ece62fa-00e6-4507-8fdd-ceca96eea6f9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-whszl\" (UID: \"2ece62fa-00e6-4507-8fdd-ceca96eea6f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794101 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794129 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7aaeee-4486-42e5-be43-cdc4d23aa445-serving-cert\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794149 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-audit\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794163 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794178 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 
crc kubenswrapper[4767]: I1124 21:41:07.794193 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-oauth-serving-cert\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794210 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l56bt\" (UniqueName: \"kubernetes.io/projected/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-kube-api-access-l56bt\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794226 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794241 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-oauth-config\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794278 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cba6b5e8-eb8a-40fb-b684-c7f08ef491c5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-49ppc\" (UID: \"cba6b5e8-eb8a-40fb-b684-c7f08ef491c5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794294 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794312 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-serving-cert\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794328 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-service-ca\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794347 4767 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd491358-4379-40eb-a9b1-285abcbeb89c-audit-dir\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794357 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794362 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72e9e13e-3775-4751-9b9c-466f114cff18-metrics-tls\") pod \"dns-operator-744455d44c-f7dpz\" (UID: \"72e9e13e-3775-4751-9b9c-466f114cff18\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794506 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794523 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgs4\" (UniqueName: \"kubernetes.io/projected/d5e859ee-0d8c-48c7-8251-25c79a040f99-kube-api-access-7kgs4\") pod \"openshift-config-operator-7777fb866f-nf9x2\" (UID: \"d5e859ee-0d8c-48c7-8251-25c79a040f99\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794544 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c3520aa-b012-4e35-8336-6655ef28eae8-etcd-client\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794560 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgjzc\" (UniqueName: \"kubernetes.io/projected/d58a04f0-dcce-4a15-9248-06fe40d8fceb-kube-api-access-fgjzc\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794579 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a867a71a-121a-4f12-8c81-7b14f0a4fd16-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5xcvh\" (UID: \"a867a71a-121a-4f12-8c81-7b14f0a4fd16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794594 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794614 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd491358-4379-40eb-a9b1-285abcbeb89c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794629 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-config\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794647 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9wz\" (UniqueName: \"kubernetes.io/projected/311b014f-099c-4f63-a46e-ccf2684847db-kube-api-access-zm9wz\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794662 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e7aaeee-4486-42e5-be43-cdc4d23aa445-trusted-ca\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794681 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794713 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794719 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-image-import-ca\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794750 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cd491358-4379-40eb-a9b1-285abcbeb89c-audit-policies\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794764 4767 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-config\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794779 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794812 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c4651fee-37da-4038-895d-4b483d41240e-etcd-ca\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794846 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18770b7d-cd23-4e8b-89e5-67986cfbad15-config\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794864 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-trusted-ca-bundle\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794880 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/311b014f-099c-4f63-a46e-ccf2684847db-audit-dir\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794894 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794916 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6sq\" (UniqueName: \"kubernetes.io/projected/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-kube-api-access-9n6sq\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794933 4767 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45fc5ef3-7c5f-4920-8509-a4566b3e3c7d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nm78c\" (UID: \"45fc5ef3-7c5f-4920-8509-a4566b3e3c7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794960 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fc5ef3-7c5f-4920-8509-a4566b3e3c7d-config\") pod \"kube-apiserver-operator-766d6c64bb-nm78c\" (UID: \"45fc5ef3-7c5f-4920-8509-a4566b3e3c7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794976 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fc5ef3-7c5f-4920-8509-a4566b3e3c7d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nm78c\" (UID: \"45fc5ef3-7c5f-4920-8509-a4566b3e3c7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.794990 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-etcd-serving-ca\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795009 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ece62fa-00e6-4507-8fdd-ceca96eea6f9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-whszl\" (UID: \"2ece62fa-00e6-4507-8fdd-ceca96eea6f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795028 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv4t2\" (UniqueName: \"kubernetes.io/projected/cd491358-4379-40eb-a9b1-285abcbeb89c-kube-api-access-zv4t2\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795042 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-client-ca\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795060 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-client-ca\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795074 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87p4s\" (UniqueName: \"kubernetes.io/projected/72e9e13e-3775-4751-9b9c-466f114cff18-kube-api-access-87p4s\") pod \"dns-operator-744455d44c-f7dpz\" (UID: \"72e9e13e-3775-4751-9b9c-466f114cff18\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795094 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-serving-cert\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795166 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795181 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn6x7\" (UniqueName: \"kubernetes.io/projected/6c3520aa-b012-4e35-8336-6655ef28eae8-kube-api-access-vn6x7\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795197 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d541163-dd1b-4486-9939-4eaa9ec350bf-config\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795213 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85d5\" (UniqueName: \"kubernetes.io/projected/2ece62fa-00e6-4507-8fdd-ceca96eea6f9-kube-api-access-g85d5\") pod \"openshift-apiserver-operator-796bbdcf4f-whszl\" (UID: \"2ece62fa-00e6-4507-8fdd-ceca96eea6f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795229 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d5e859ee-0d8c-48c7-8251-25c79a040f99-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nf9x2\" (UID: \"d5e859ee-0d8c-48c7-8251-25c79a040f99\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795247 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3520aa-b012-4e35-8336-6655ef28eae8-serving-cert\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795215 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795345 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cd491358-4379-40eb-a9b1-285abcbeb89c-encryption-config\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795412 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd491358-4379-40eb-a9b1-285abcbeb89c-serving-cert\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795432 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cba6b5e8-eb8a-40fb-b684-c7f08ef491c5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-49ppc\" (UID: \"cba6b5e8-eb8a-40fb-b684-c7f08ef491c5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795450 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-config\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795473 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-console-config\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.795497 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vws5j\" (UniqueName: \"kubernetes.io/projected/a867a71a-121a-4f12-8c81-7b14f0a4fd16-kube-api-access-vws5j\") pod \"machine-config-controller-84d6567774-5xcvh\" (UID: \"a867a71a-121a-4f12-8c81-7b14f0a4fd16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.796294 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vdb2k"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.797155 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.797616 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.798730 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.799321 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.805477 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.805555 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.805897 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.806245 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.806411 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.808083 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.808250 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.809292 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.810219 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.811133 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4d8cc"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.812067 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mx2n5"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.812967 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-29qkn"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.813444 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.813990 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.814478 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.814873 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fk657"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.815993 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fk657" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.817474 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.819578 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.820036 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.820599 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpc7v"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.821936 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.822016 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.823396 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.825350 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.827477 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.828475 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mp4ng"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.829516 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h9pjp"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.830530 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f7dpz"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.831764 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.832847 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.833835 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n8vvb"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.835070 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console-operator/console-operator-58897d9998-nlpg6"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.836502 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gmxsv"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.837991 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.838112 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x76bn"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.839031 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ck7c4"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.839841 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.840790 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.841773 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6fxbb"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.842421 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6fxbb" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.842887 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qcv2q"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.844260 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.844374 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.845998 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.854919 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.857231 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.859513 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wrbrz"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.862049 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.864935 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qcv2q"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.867239 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.868230 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6fxbb"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.869282 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.870805 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.871626 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.872648 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.879404 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpc7v"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.881537 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-29qkn"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.881627 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.881659 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qdgxl"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.882673 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qdgxl" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.882913 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qdgxl"] Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896061 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cba6b5e8-eb8a-40fb-b684-c7f08ef491c5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-49ppc\" (UID: \"cba6b5e8-eb8a-40fb-b684-c7f08ef491c5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896092 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896110 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-serving-cert\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896125 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-oauth-config\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896142 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-service-ca\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896156 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd491358-4379-40eb-a9b1-285abcbeb89c-audit-dir\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896171 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72e9e13e-3775-4751-9b9c-466f114cff18-metrics-tls\") pod \"dns-operator-744455d44c-f7dpz\" (UID: \"72e9e13e-3775-4751-9b9c-466f114cff18\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896188 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" Nov 24 21:41:07 
crc kubenswrapper[4767]: I1124 21:41:07.896207 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgs4\" (UniqueName: \"kubernetes.io/projected/d5e859ee-0d8c-48c7-8251-25c79a040f99-kube-api-access-7kgs4\") pod \"openshift-config-operator-7777fb866f-nf9x2\" (UID: \"d5e859ee-0d8c-48c7-8251-25c79a040f99\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896222 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c3520aa-b012-4e35-8336-6655ef28eae8-etcd-client\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896237 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgjzc\" (UniqueName: \"kubernetes.io/projected/d58a04f0-dcce-4a15-9248-06fe40d8fceb-kube-api-access-fgjzc\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896254 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd491358-4379-40eb-a9b1-285abcbeb89c-audit-dir\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896253 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a867a71a-121a-4f12-8c81-7b14f0a4fd16-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5xcvh\" (UID: \"a867a71a-121a-4f12-8c81-7b14f0a4fd16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896324 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd491358-4379-40eb-a9b1-285abcbeb89c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.896355 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-config\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897021 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd491358-4379-40eb-a9b1-285abcbeb89c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897254 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-service-ca\") pod 
\"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897372 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cba6b5e8-eb8a-40fb-b684-c7f08ef491c5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-49ppc\" (UID: \"cba6b5e8-eb8a-40fb-b684-c7f08ef491c5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897399 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9wz\" (UniqueName: \"kubernetes.io/projected/311b014f-099c-4f63-a46e-ccf2684847db-kube-api-access-zm9wz\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897433 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897454 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e7aaeee-4486-42e5-be43-cdc4d23aa445-trusted-ca\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897507 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a867a71a-121a-4f12-8c81-7b14f0a4fd16-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5xcvh\" (UID: \"a867a71a-121a-4f12-8c81-7b14f0a4fd16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897514 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897584 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-image-import-ca\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897594 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897619 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/cd491358-4379-40eb-a9b1-285abcbeb89c-audit-policies\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897651 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-config\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897685 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c4651fee-37da-4038-895d-4b483d41240e-etcd-ca\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897717 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897746 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18770b7d-cd23-4e8b-89e5-67986cfbad15-config\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897768 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-trusted-ca-bundle\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897793 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/311b014f-099c-4f63-a46e-ccf2684847db-audit-dir\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897816 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897842 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6sq\" (UniqueName: \"kubernetes.io/projected/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-kube-api-access-9n6sq\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897868 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45fc5ef3-7c5f-4920-8509-a4566b3e3c7d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nm78c\" (UID: \"45fc5ef3-7c5f-4920-8509-a4566b3e3c7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897892 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fc5ef3-7c5f-4920-8509-a4566b3e3c7d-config\") pod \"kube-apiserver-operator-766d6c64bb-nm78c\" (UID: \"45fc5ef3-7c5f-4920-8509-a4566b3e3c7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897907 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/311b014f-099c-4f63-a46e-ccf2684847db-audit-dir\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897916 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fc5ef3-7c5f-4920-8509-a4566b3e3c7d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nm78c\" (UID: \"45fc5ef3-7c5f-4920-8509-a4566b3e3c7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897954 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ece62fa-00e6-4507-8fdd-ceca96eea6f9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-whszl\" (UID: \"2ece62fa-00e6-4507-8fdd-ceca96eea6f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.897982 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv4t2\" (UniqueName: \"kubernetes.io/projected/cd491358-4379-40eb-a9b1-285abcbeb89c-kube-api-access-zv4t2\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898005 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-client-ca\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898032 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-etcd-serving-ca\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898054 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-client-ca\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898075 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87p4s\" (UniqueName: \"kubernetes.io/projected/72e9e13e-3775-4751-9b9c-466f114cff18-kube-api-access-87p4s\") pod \"dns-operator-744455d44c-f7dpz\" (UID: \"72e9e13e-3775-4751-9b9c-466f114cff18\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898099 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-serving-cert\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898120 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898146 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d541163-dd1b-4486-9939-4eaa9ec350bf-config\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898163 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g85d5\" (UniqueName: \"kubernetes.io/projected/2ece62fa-00e6-4507-8fdd-ceca96eea6f9-kube-api-access-g85d5\") pod \"openshift-apiserver-operator-796bbdcf4f-whszl\" (UID: \"2ece62fa-00e6-4507-8fdd-ceca96eea6f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898180 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d5e859ee-0d8c-48c7-8251-25c79a040f99-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nf9x2\" (UID: \"d5e859ee-0d8c-48c7-8251-25c79a040f99\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898196 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3520aa-b012-4e35-8336-6655ef28eae8-serving-cert\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898210 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn6x7\" (UniqueName: \"kubernetes.io/projected/6c3520aa-b012-4e35-8336-6655ef28eae8-kube-api-access-vn6x7\") pod \"apiserver-76f77b778f-vdb2k\" (UID: 
\"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898228 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cd491358-4379-40eb-a9b1-285abcbeb89c-encryption-config\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898244 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd491358-4379-40eb-a9b1-285abcbeb89c-serving-cert\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898259 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cba6b5e8-eb8a-40fb-b684-c7f08ef491c5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-49ppc\" (UID: \"cba6b5e8-eb8a-40fb-b684-c7f08ef491c5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898292 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-config\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898310 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-console-config\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898325 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vws5j\" (UniqueName: \"kubernetes.io/projected/a867a71a-121a-4f12-8c81-7b14f0a4fd16-kube-api-access-vws5j\") pod \"machine-config-controller-84d6567774-5xcvh\" (UID: \"a867a71a-121a-4f12-8c81-7b14f0a4fd16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898351 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898367 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c3520aa-b012-4e35-8336-6655ef28eae8-encryption-config\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898385 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58a04f0-dcce-4a15-9248-06fe40d8fceb-serving-cert\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898404 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/571ff756-8e0a-4959-9dcb-b2c9aff1e7c0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-f6j64\" (UID: \"571ff756-8e0a-4959-9dcb-b2c9aff1e7c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898422 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4651fee-37da-4038-895d-4b483d41240e-config\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898442 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898458 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18770b7d-cd23-4e8b-89e5-67986cfbad15-service-ca-bundle\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898500 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d541163-dd1b-4486-9939-4eaa9ec350bf-auth-proxy-config\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898516 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c3520aa-b012-4e35-8336-6655ef28eae8-node-pullsecrets\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898530 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c3520aa-b012-4e35-8336-6655ef28eae8-audit-dir\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898545 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbqlp\" (UniqueName: \"kubernetes.io/projected/571ff756-8e0a-4959-9dcb-b2c9aff1e7c0-kube-api-access-lbqlp\") pod \"cluster-samples-operator-665b6dd947-f6j64\" (UID: 
\"571ff756-8e0a-4959-9dcb-b2c9aff1e7c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898551 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898563 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc2kr\" (UniqueName: \"kubernetes.io/projected/cba6b5e8-eb8a-40fb-b684-c7f08ef491c5-kube-api-access-kc2kr\") pod \"openshift-controller-manager-operator-756b6f6bc6-49ppc\" (UID: \"cba6b5e8-eb8a-40fb-b684-c7f08ef491c5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898595 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-audit-policies\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898622 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898646 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898711 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slj22\" (UniqueName: \"kubernetes.io/projected/18770b7d-cd23-4e8b-89e5-67986cfbad15-kube-api-access-slj22\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898735 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cd491358-4379-40eb-a9b1-285abcbeb89c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898757 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7aaeee-4486-42e5-be43-cdc4d23aa445-config\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " 
pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898781 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4651fee-37da-4038-895d-4b483d41240e-serving-cert\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898804 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18770b7d-cd23-4e8b-89e5-67986cfbad15-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898828 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0d541163-dd1b-4486-9939-4eaa9ec350bf-machine-approver-tls\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898840 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e7aaeee-4486-42e5-be43-cdc4d23aa445-trusted-ca\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898853 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cd491358-4379-40eb-a9b1-285abcbeb89c-etcd-client\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898871 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898878 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dwqw\" (UniqueName: \"kubernetes.io/projected/1e7aaeee-4486-42e5-be43-cdc4d23aa445-kube-api-access-9dwqw\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.898903 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hzkz\" (UniqueName: \"kubernetes.io/projected/c4651fee-37da-4038-895d-4b483d41240e-kube-api-access-8hzkz\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.899283 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/18770b7d-cd23-4e8b-89e5-67986cfbad15-config\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.899373 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cd491358-4379-40eb-a9b1-285abcbeb89c-audit-policies\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.899530 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cd491358-4379-40eb-a9b1-285abcbeb89c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.899931 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18770b7d-cd23-4e8b-89e5-67986cfbad15-service-ca-bundle\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.899934 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-image-import-ca\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.899935 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-etcd-serving-ca\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.899975 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c3520aa-b012-4e35-8336-6655ef28eae8-audit-dir\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900143 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900164 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d541163-dd1b-4486-9939-4eaa9ec350bf-auth-proxy-config\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 
21:41:07.900194 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900232 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5e859ee-0d8c-48c7-8251-25c79a040f99-serving-cert\") pod \"openshift-config-operator-7777fb866f-nf9x2\" (UID: \"d5e859ee-0d8c-48c7-8251-25c79a040f99\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900261 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4651fee-37da-4038-895d-4b483d41240e-etcd-service-ca\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900303 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4651fee-37da-4038-895d-4b483d41240e-etcd-client\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900328 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a867a71a-121a-4f12-8c81-7b14f0a4fd16-proxy-tls\") pod \"machine-config-controller-84d6567774-5xcvh\" (UID: \"a867a71a-121a-4f12-8c81-7b14f0a4fd16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900357 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn9h8\" (UniqueName: \"kubernetes.io/projected/0d541163-dd1b-4486-9939-4eaa9ec350bf-kube-api-access-mn9h8\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900384 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp945\" (UniqueName: \"kubernetes.io/projected/86bad83e-cde9-43a8-803a-fda0e14ef559-kube-api-access-hp945\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900399 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d541163-dd1b-4486-9939-4eaa9ec350bf-config\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900411 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz4pp\" (UniqueName: 
\"kubernetes.io/projected/6b463b5d-b072-4032-aa46-9abe955f901b-kube-api-access-sz4pp\") pod \"downloads-7954f5f757-4d8cc\" (UID: \"6b463b5d-b072-4032-aa46-9abe955f901b\") " pod="openshift-console/downloads-7954f5f757-4d8cc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900420 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c3520aa-b012-4e35-8336-6655ef28eae8-node-pullsecrets\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900444 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ece62fa-00e6-4507-8fdd-ceca96eea6f9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-whszl\" (UID: \"2ece62fa-00e6-4507-8fdd-ceca96eea6f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900471 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900494 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18770b7d-cd23-4e8b-89e5-67986cfbad15-serving-cert\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.900522 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-audit-policies\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.901419 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-client-ca\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.901644 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4651fee-37da-4038-895d-4b483d41240e-config\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.901745 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d5e859ee-0d8c-48c7-8251-25c79a040f99-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nf9x2\" (UID: \"d5e859ee-0d8c-48c7-8251-25c79a040f99\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" Nov 24 21:41:07 
crc kubenswrapper[4767]: I1124 21:41:07.901769 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ece62fa-00e6-4507-8fdd-ceca96eea6f9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-whszl\" (UID: \"2ece62fa-00e6-4507-8fdd-ceca96eea6f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.902105 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-trusted-ca-bundle\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.902429 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-config\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.902520 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.902553 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-config\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.902851 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.903116 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4651fee-37da-4038-895d-4b483d41240e-etcd-service-ca\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.903258 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.903671 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58a04f0-dcce-4a15-9248-06fe40d8fceb-serving-cert\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: 
\"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.903777 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-console-config\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.904602 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72e9e13e-3775-4751-9b9c-466f114cff18-metrics-tls\") pod \"dns-operator-744455d44c-f7dpz\" (UID: \"72e9e13e-3775-4751-9b9c-466f114cff18\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.904667 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18770b7d-cd23-4e8b-89e5-67986cfbad15-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.904730 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.905170 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cba6b5e8-eb8a-40fb-b684-c7f08ef491c5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-49ppc\" (UID: \"cba6b5e8-eb8a-40fb-b684-c7f08ef491c5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.905316 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7aaeee-4486-42e5-be43-cdc4d23aa445-config\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.905367 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-client-ca\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.905813 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.906366 4767 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.906425 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-config\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.907002 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/571ff756-8e0a-4959-9dcb-b2c9aff1e7c0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-f6j64\" (UID: \"571ff756-8e0a-4959-9dcb-b2c9aff1e7c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.907450 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c3520aa-b012-4e35-8336-6655ef28eae8-encryption-config\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.907586 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3520aa-b012-4e35-8336-6655ef28eae8-serving-cert\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.907775 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4651fee-37da-4038-895d-4b483d41240e-etcd-client\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.908186 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-serving-cert\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.908387 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5e859ee-0d8c-48c7-8251-25c79a040f99-serving-cert\") pod \"openshift-config-operator-7777fb866f-nf9x2\" (UID: \"d5e859ee-0d8c-48c7-8251-25c79a040f99\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.908735 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7aaeee-4486-42e5-be43-cdc4d23aa445-serving-cert\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " 
pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.908770 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-audit\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.908790 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.908807 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.908830 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-oauth-serving-cert\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.908855 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l56bt\" (UniqueName: \"kubernetes.io/projected/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-kube-api-access-l56bt\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.908878 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.909781 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.910060 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-oauth-serving-cert\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.910277 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/cd491358-4379-40eb-a9b1-285abcbeb89c-encryption-config\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.910279 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6c3520aa-b012-4e35-8336-6655ef28eae8-audit\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.910782 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.911752 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.911931 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd491358-4379-40eb-a9b1-285abcbeb89c-serving-cert\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.912288 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0d541163-dd1b-4486-9939-4eaa9ec350bf-machine-approver-tls\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.912345 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-serving-cert\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.912402 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.912429 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-oauth-config\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 
21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.912814 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cd491358-4379-40eb-a9b1-285abcbeb89c-etcd-client\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.912981 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7aaeee-4486-42e5-be43-cdc4d23aa445-serving-cert\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " pod="openshift-console-operator/console-operator-58897d9998-nlpg6"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.912982 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4651fee-37da-4038-895d-4b483d41240e-serving-cert\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.913408 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18770b7d-cd23-4e8b-89e5-67986cfbad15-serving-cert\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.913589 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ece62fa-00e6-4507-8fdd-ceca96eea6f9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-whszl\" (UID: \"2ece62fa-00e6-4507-8fdd-ceca96eea6f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.914914 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.915314 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.916982 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c3520aa-b012-4e35-8336-6655ef28eae8-etcd-client\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.917547 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.938146 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.957369 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.979730 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.986707 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c4651fee-37da-4038-895d-4b483d41240e-etcd-ca\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp"
Nov 24 21:41:07 crc kubenswrapper[4767]: I1124 21:41:07.998444 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.017711 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.037600 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.043480 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45fc5ef3-7c5f-4920-8509-a4566b3e3c7d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nm78c\" (UID: \"45fc5ef3-7c5f-4920-8509-a4566b3e3c7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.057842 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.059797 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fc5ef3-7c5f-4920-8509-a4566b3e3c7d-config\") pod \"kube-apiserver-operator-766d6c64bb-nm78c\" (UID: \"45fc5ef3-7c5f-4920-8509-a4566b3e3c7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.098016 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.109320 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a867a71a-121a-4f12-8c81-7b14f0a4fd16-proxy-tls\") pod \"machine-config-controller-84d6567774-5xcvh\" (UID: \"a867a71a-121a-4f12-8c81-7b14f0a4fd16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.117893 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.138510 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.157921 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.186021 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.199013 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.218961 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.240172 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.261618 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.277559 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.298667 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.319651 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.338446 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.358663 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.378780 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.398136 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.418505 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.439386 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.458985 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.478532 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.499229 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.518015 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.538352 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.559125 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.578358 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.598170 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.618446 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.638138 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.658114 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.699165 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.718688 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.738825 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.767772 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.778456 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.796530 4767 request.go:700] Waited for 1.002001227s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&limit=500&resourceVersion=0
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.798890 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.819026 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.837999 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.859650 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.879245 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.899011 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.918620 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.938407 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.959087 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.978513 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 24 21:41:08 crc kubenswrapper[4767]: I1124 21:41:08.999654 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.019669 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.040122 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.060186 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.078588 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.098461 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.119740 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.139264 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.158918 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.178777 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.198948 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.218240 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.239795 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.258817 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.278084 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.298219 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.318601 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.339051 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.358921 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.378735 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.411943 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.419063 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.447241 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.459394 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.478162 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.498518 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.518538 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.538645 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.558806 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.578512 4767 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.598469 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.618707 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.641417 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.658199 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.693101 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgs4\" (UniqueName: \"kubernetes.io/projected/d5e859ee-0d8c-48c7-8251-25c79a040f99-kube-api-access-7kgs4\") pod \"openshift-config-operator-7777fb866f-nf9x2\" (UID: \"d5e859ee-0d8c-48c7-8251-25c79a040f99\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.728939 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgjzc\" (UniqueName: \"kubernetes.io/projected/d58a04f0-dcce-4a15-9248-06fe40d8fceb-kube-api-access-fgjzc\") pod \"route-controller-manager-6576b87f9c-hwxt8\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.743373 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9wz\" (UniqueName: \"kubernetes.io/projected/311b014f-099c-4f63-a46e-ccf2684847db-kube-api-access-zm9wz\") pod \"oauth-openshift-558db77b4-x76bn\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " pod="openshift-authentication/oauth-openshift-558db77b4-x76bn"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.757668 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fc5ef3-7c5f-4920-8509-a4566b3e3c7d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nm78c\" (UID: \"45fc5ef3-7c5f-4920-8509-a4566b3e3c7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.765720 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.782455 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc2kr\" (UniqueName: \"kubernetes.io/projected/cba6b5e8-eb8a-40fb-b684-c7f08ef491c5-kube-api-access-kc2kr\") pod \"openshift-controller-manager-operator-756b6f6bc6-49ppc\" (UID: \"cba6b5e8-eb8a-40fb-b684-c7f08ef491c5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.783813 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.797027 4767 request.go:700] Waited for 1.89754546s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.798678 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6sq\" (UniqueName: \"kubernetes.io/projected/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-kube-api-access-9n6sq\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.818907 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87p4s\" (UniqueName: \"kubernetes.io/projected/72e9e13e-3775-4751-9b9c-466f114cff18-kube-api-access-87p4s\") pod \"dns-operator-744455d44c-f7dpz\" (UID: \"72e9e13e-3775-4751-9b9c-466f114cff18\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.831239 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv4t2\" (UniqueName: \"kubernetes.io/projected/cd491358-4379-40eb-a9b1-285abcbeb89c-kube-api-access-zv4t2\") pod \"apiserver-7bbb656c7d-2c4l5\" (UID: \"cd491358-4379-40eb-a9b1-285abcbeb89c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.837408 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.846434 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.855332 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g85d5\" (UniqueName: \"kubernetes.io/projected/2ece62fa-00e6-4507-8fdd-ceca96eea6f9-kube-api-access-g85d5\") pod \"openshift-apiserver-operator-796bbdcf4f-whszl\" (UID: \"2ece62fa-00e6-4507-8fdd-ceca96eea6f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.877443 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbqlp\" (UniqueName: \"kubernetes.io/projected/571ff756-8e0a-4959-9dcb-b2c9aff1e7c0-kube-api-access-lbqlp\") pod \"cluster-samples-operator-665b6dd947-f6j64\" (UID: \"571ff756-8e0a-4959-9dcb-b2c9aff1e7c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.893835 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hzkz\" (UniqueName: \"kubernetes.io/projected/c4651fee-37da-4038-895d-4b483d41240e-kube-api-access-8hzkz\") pod \"etcd-operator-b45778765-h9pjp\" (UID: \"c4651fee-37da-4038-895d-4b483d41240e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.912012 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slj22\" (UniqueName: \"kubernetes.io/projected/18770b7d-cd23-4e8b-89e5-67986cfbad15-kube-api-access-slj22\") pod \"authentication-operator-69f744f599-gmxsv\" (UID: \"18770b7d-cd23-4e8b-89e5-67986cfbad15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.924857 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.936137 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vws5j\" (UniqueName: \"kubernetes.io/projected/a867a71a-121a-4f12-8c81-7b14f0a4fd16-kube-api-access-vws5j\") pod \"machine-config-controller-84d6567774-5xcvh\" (UID: \"a867a71a-121a-4f12-8c81-7b14f0a4fd16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.940927 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.953040 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.953581 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn6x7\" (UniqueName: \"kubernetes.io/projected/6c3520aa-b012-4e35-8336-6655ef28eae8-kube-api-access-vn6x7\") pod \"apiserver-76f77b778f-vdb2k\" (UID: \"6c3520aa-b012-4e35-8336-6655ef28eae8\") " pod="openshift-apiserver/apiserver-76f77b778f-vdb2k"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.956687 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.962927 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c"]
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.970344 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.979638 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz4pp\" (UniqueName: \"kubernetes.io/projected/6b463b5d-b072-4032-aa46-9abe955f901b-kube-api-access-sz4pp\") pod \"downloads-7954f5f757-4d8cc\" (UID: \"6b463b5d-b072-4032-aa46-9abe955f901b\") " pod="openshift-console/downloads-7954f5f757-4d8cc"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.989592 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz"
Nov 24 21:41:09 crc kubenswrapper[4767]: I1124 21:41:09.993522 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dwqw\" (UniqueName: \"kubernetes.io/projected/1e7aaeee-4486-42e5-be43-cdc4d23aa445-kube-api-access-9dwqw\") pod \"console-operator-58897d9998-nlpg6\" (UID: \"1e7aaeee-4486-42e5-be43-cdc4d23aa445\") " pod="openshift-console-operator/console-operator-58897d9998-nlpg6"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.016967 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn9h8\" (UniqueName: \"kubernetes.io/projected/0d541163-dd1b-4486-9939-4eaa9ec350bf-kube-api-access-mn9h8\") pod \"machine-approver-56656f9798-l49k7\" (UID: \"0d541163-dd1b-4486-9939-4eaa9ec350bf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.021010 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8"]
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.036442 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/071ceb07-cd7e-43a9-b9f0-c2ef0837f336-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zck7s\" (UID: \"071ceb07-cd7e-43a9-b9f0-c2ef0837f336\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.048020 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.051652 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l56bt\" (UniqueName: \"kubernetes.io/projected/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-kube-api-access-l56bt\") pod \"controller-manager-879f6c89f-wrbrz\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.052441 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vdb2k"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.052769 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.064207 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2"]
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.073895 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp945\" (UniqueName: \"kubernetes.io/projected/86bad83e-cde9-43a8-803a-fda0e14ef559-kube-api-access-hp945\") pod \"console-f9d7485db-mp4ng\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") " pod="openshift-console/console-f9d7485db-mp4ng"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.074009 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.101190 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5"]
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.115672 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" event={"ID":"d58a04f0-dcce-4a15-9248-06fe40d8fceb","Type":"ContainerStarted","Data":"5badbc8209128a80466bef3436358bf63c4fdffb24a65984578ca4f7f4b9dd9a"}
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.122912 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c" event={"ID":"45fc5ef3-7c5f-4920-8509-a4566b3e3c7d","Type":"ContainerStarted","Data":"b224384e2f7ee1b79fd1cef2b862e8eab51dd2a1e4c7030b24ce0a1e72e2f8df"}
Nov 24 21:41:10 crc kubenswrapper[4767]: W1124 21:41:10.125550 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5e859ee_0d8c_48c7_8251_25c79a040f99.slice/crio-0516923c7ba14308d1dde757a2b9477dd008c59a7c97632e6b580957164599d4 WatchSource:0}: Error finding container 0516923c7ba14308d1dde757a2b9477dd008c59a7c97632e6b580957164599d4: Status 404 returned error can't find the container with id 0516923c7ba14308d1dde757a2b9477dd008c59a7c97632e6b580957164599d4
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.136852 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.136885 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-stats-auth\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.136903 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd8bc\" (UniqueName: \"kubernetes.io/projected/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-kube-api-access-sd8bc\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.136949 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04c820d8-acd5-42ce-8c38-7027eae3d43d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137020 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjl9j\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-kube-api-access-zjl9j\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137034 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-metrics-certs\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137085 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45b0fcf9-821d-4504-acf3-2d1cfb83d093-images\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137101 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjcjv\" (UniqueName: \"kubernetes.io/projected/9639809f-913e-44e8-91db-731add21e1a4-kube-api-access-wjcjv\") pod \"migrator-59844c95c7-qgfw9\" (UID: \"9639809f-913e-44e8-91db-731add21e1a4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137151 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6nlq\" (UniqueName: \"kubernetes.io/projected/8c0bd833-4b37-400e-8394-e8311efb343b-kube-api-access-q6nlq\") pod \"multus-admission-controller-857f4d67dd-n8vvb\" (UID: \"8c0bd833-4b37-400e-8394-e8311efb343b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137168 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-bound-sa-token\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137191 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-trusted-ca\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137226 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mtz\" (UniqueName: \"kubernetes.io/projected/45b0fcf9-821d-4504-acf3-2d1cfb83d093-kube-api-access-f5mtz\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137253 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-tls\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137295 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137312 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-images\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137333 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b097a05-812b-4417-9410-fef3f70a193f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6b8mk\" (UID: \"8b097a05-812b-4417-9410-fef3f70a193f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137349 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-default-certificate\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137383 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b097a05-812b-4417-9410-fef3f70a193f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6b8mk\" (UID: \"8b097a05-812b-4417-9410-fef3f70a193f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137397 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-proxy-tls\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137445 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b097a05-812b-4417-9410-fef3f70a193f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6b8mk\" (UID: \"8b097a05-812b-4417-9410-fef3f70a193f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137475 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b0fcf9-821d-4504-acf3-2d1cfb83d093-config\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137489 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45b0fcf9-821d-4504-acf3-2d1cfb83d093-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137507 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84c8e36e-716b-42d4-92f2-21540ab8568a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mv2hv\" (UID: \"84c8e36e-716b-42d4-92f2-21540ab8568a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137593 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g68sm\" (UniqueName: \"kubernetes.io/projected/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-kube-api-access-g68sm\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137610 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8c0bd833-4b37-400e-8394-e8311efb343b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n8vvb\" (UID: \"8c0bd833-4b37-400e-8394-e8311efb343b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137639 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-certificates\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137655 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk829\" (UniqueName: \"kubernetes.io/projected/84c8e36e-716b-42d4-92f2-21540ab8568a-kube-api-access-qk829\") pod \"kube-storage-version-migrator-operator-b67b599dd-mv2hv\" (UID: \"84c8e36e-716b-42d4-92f2-21540ab8568a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137702 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-service-ca-bundle\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137718 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c8e36e-716b-42d4-92f2-21540ab8568a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mv2hv\" (UID: \"84c8e36e-716b-42d4-92f2-21540ab8568a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.137743 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04c820d8-acd5-42ce-8c38-7027eae3d43d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.139236 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:10.639224221 +0000 UTC m=+153.556207583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.169699 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl"]
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.196524 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.212472 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4d8cc"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.236953 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mp4ng"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239172 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239375 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b097a05-812b-4417-9410-fef3f70a193f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6b8mk\" (UID: \"8b097a05-812b-4417-9410-fef3f70a193f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk"
Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.239406 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:10.739381875 +0000 UTC m=+153.656365297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239438 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b0fcf9-821d-4504-acf3-2d1cfb83d093-config\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239480 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45b0fcf9-821d-4504-acf3-2d1cfb83d093-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239517 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84c8e36e-716b-42d4-92f2-21540ab8568a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mv2hv\" (UID: \"84c8e36e-716b-42d4-92f2-21540ab8568a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239564 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-csi-data-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239587 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/543e4218-8da0-43fe-bf43-1ec803edcc30-srv-cert\") pod \"catalog-operator-68c6474976-9mbml\" (UID: \"543e4218-8da0-43fe-bf43-1ec803edcc30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239651 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2201399d-776a-43fb-94cc-c288a6dae7df-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8r9jr\" (UID: \"2201399d-776a-43fb-94cc-c288a6dae7df\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239705 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g68sm\" (UniqueName: \"kubernetes.io/projected/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-kube-api-access-g68sm\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239745 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpc7v\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239778 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8c0bd833-4b37-400e-8394-e8311efb343b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n8vvb\" (UID: \"8c0bd833-4b37-400e-8394-e8311efb343b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239803 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-certificates\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239861 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c8e36e-716b-42d4-92f2-21540ab8568a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mv2hv\" (UID: \"84c8e36e-716b-42d4-92f2-21540ab8568a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239883 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk829\" (UniqueName: \"kubernetes.io/projected/84c8e36e-716b-42d4-92f2-21540ab8568a-kube-api-access-qk829\") pod \"kube-storage-version-migrator-operator-b67b599dd-mv2hv\" (UID: \"84c8e36e-716b-42d4-92f2-21540ab8568a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239904 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-service-ca-bundle\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239955 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04c820d8-acd5-42ce-8c38-7027eae3d43d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.239993 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8c8z\" (UniqueName: \"kubernetes.io/projected/863df8e8-3e7f-4d7e-bb01-c63359a9024c-kube-api-access-p8c8z\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240028 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1883436-188c-45d6-b63b-45cdc822fe99-cert\") pod \"ingress-canary-6fxbb\" (UID: \"b1883436-188c-45d6-b63b-45cdc822fe99\") " pod="openshift-ingress-canary/ingress-canary-6fxbb"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240063 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a40305b2-c53d-4aa0-8b36-80485e145c46-config\") pod \"kube-controller-manager-operator-78b949d7b-pv7sg\" (UID: \"a40305b2-c53d-4aa0-8b36-80485e145c46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240100 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hchkk\" (UniqueName: \"kubernetes.io/projected/543e4218-8da0-43fe-bf43-1ec803edcc30-kube-api-access-hchkk\") pod \"catalog-operator-68c6474976-9mbml\" (UID: \"543e4218-8da0-43fe-bf43-1ec803edcc30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240122 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8788fcfc-dcff-417e-af1b-1a0938543820-metrics-tls\") pod \"dns-default-qdgxl\" (UID: \"8788fcfc-dcff-417e-af1b-1a0938543820\") " pod="openshift-dns/dns-default-qdgxl"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240157 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240179 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b0fcf9-821d-4504-acf3-2d1cfb83d093-config\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240179 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd8bc\" (UniqueName: \"kubernetes.io/projected/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-kube-api-access-sd8bc\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240222 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-stats-auth\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240245 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e329e7d-cfae-4b82-8864-5166dce6a68d-node-bootstrap-token\") pod \"machine-config-server-fk657\" (UID: \"3e329e7d-cfae-4b82-8864-5166dce6a68d\") " pod="openshift-machine-config-operator/machine-config-server-fk657"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240281 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn7q6\" (UniqueName: \"kubernetes.io/projected/207b1355-917a-4e05-b680-45f50ec116dd-kube-api-access-cn7q6\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240302 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hw8k\" (UniqueName: \"kubernetes.io/projected/2201399d-776a-43fb-94cc-c288a6dae7df-kube-api-access-6hw8k\") pod \"olm-operator-6b444d44fb-8r9jr\" (UID: \"2201399d-776a-43fb-94cc-c288a6dae7df\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240317 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c88eb915-2203-4a33-ba3e-ba039aa01296-serving-cert\") pod \"service-ca-operator-777779d784-rs9p9\" (UID: \"c88eb915-2203-4a33-ba3e-ba039aa01296\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240345 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-plugins-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240362 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8788fcfc-dcff-417e-af1b-1a0938543820-config-volume\") pod \"dns-default-qdgxl\" (UID: \"8788fcfc-dcff-417e-af1b-1a0938543820\") " pod="openshift-dns/dns-default-qdgxl"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.241429 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c8e36e-716b-42d4-92f2-21540ab8568a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mv2hv\" (UID: \"84c8e36e-716b-42d4-92f2-21540ab8568a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.242255 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-service-ca-bundle\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.242604 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04c820d8-acd5-42ce-8c38-7027eae3d43d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.242759 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.243903 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-certificates\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.240380 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfl5d\" (UniqueName: \"kubernetes.io/projected/3e329e7d-cfae-4b82-8864-5166dce6a68d-kube-api-access-bfl5d\") pod \"machine-config-server-fk657\" (UID: \"3e329e7d-cfae-4b82-8864-5166dce6a68d\") " pod="openshift-machine-config-operator/machine-config-server-fk657"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.245595 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84c8e36e-716b-42d4-92f2-21540ab8568a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mv2hv\" (UID: \"84c8e36e-716b-42d4-92f2-21540ab8568a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.244741 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49sg9\" (UniqueName: \"kubernetes.io/projected/c88eb915-2203-4a33-ba3e-ba039aa01296-kube-api-access-49sg9\") pod \"service-ca-operator-777779d784-rs9p9\" (UID: \"c88eb915-2203-4a33-ba3e-ba039aa01296\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.245783 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04c820d8-acd5-42ce-8c38-7027eae3d43d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.245869 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/207b1355-917a-4e05-b680-45f50ec116dd-metrics-tls\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.245939 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk5jz\" (UniqueName: \"kubernetes.io/projected/e8d6ce66-68d1-45fd-9e54-6baedf990e1d-kube-api-access-pk5jz\") pod \"control-plane-machine-set-operator-78cbb6b69f-mssg2\" (UID: \"e8d6ce66-68d1-45fd-9e54-6baedf990e1d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.246012 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c88eb915-2203-4a33-ba3e-ba039aa01296-config\") pod \"service-ca-operator-777779d784-rs9p9\" (UID: \"c88eb915-2203-4a33-ba3e-ba039aa01296\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.246075 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r994\" (UniqueName: \"kubernetes.io/projected/b1883436-188c-45d6-b63b-45cdc822fe99-kube-api-access-8r994\") pod \"ingress-canary-6fxbb\" (UID: \"b1883436-188c-45d6-b63b-45cdc822fe99\") " pod="openshift-ingress-canary/ingress-canary-6fxbb"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.253837 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1b38e529-9f0b-443d-b320-60935a568f07-signing-cabundle\") pod \"service-ca-9c57cc56f-29qkn\" (UID: \"1b38e529-9f0b-443d-b320-60935a568f07\") " pod="openshift-service-ca/service-ca-9c57cc56f-29qkn"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.253902 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-metrics-certs\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994"
Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.253939 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hshdh\" (UniqueName: \"kubernetes.io/projected/f70726f8-befa-4ac3-8157-01c02fd1b2f1-kube-api-access-hshdh\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: \"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq"
Nov 24 21:41:10 crc
kubenswrapper[4767]: I1124 21:41:10.253973 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjl9j\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-kube-api-access-zjl9j\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.254076 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a40305b2-c53d-4aa0-8b36-80485e145c46-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pv7sg\" (UID: \"a40305b2-c53d-4aa0-8b36-80485e145c46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.254131 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/207b1355-917a-4e05-b680-45f50ec116dd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.254158 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-stats-auth\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.254193 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjcjv\" (UniqueName: \"kubernetes.io/projected/9639809f-913e-44e8-91db-731add21e1a4-kube-api-access-wjcjv\") pod \"migrator-59844c95c7-qgfw9\" (UID: \"9639809f-913e-44e8-91db-731add21e1a4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.254227 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45b0fcf9-821d-4504-acf3-2d1cfb83d093-images\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.254304 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f70726f8-befa-4ac3-8157-01c02fd1b2f1-tmpfs\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: \"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.256375 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45b0fcf9-821d-4504-acf3-2d1cfb83d093-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.256842 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/8c0bd833-4b37-400e-8394-e8311efb343b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n8vvb\" (UID: \"8c0bd833-4b37-400e-8394-e8311efb343b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.276057 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b097a05-812b-4417-9410-fef3f70a193f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6b8mk\" (UID: \"8b097a05-812b-4417-9410-fef3f70a193f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.276514 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6nlq\" (UniqueName: \"kubernetes.io/projected/8c0bd833-4b37-400e-8394-e8311efb343b-kube-api-access-q6nlq\") pod \"multus-admission-controller-857f4d67dd-n8vvb\" (UID: \"8c0bd833-4b37-400e-8394-e8311efb343b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.276658 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-bound-sa-token\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.276860 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-trusted-ca\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.277316 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqqq\" (UniqueName: \"kubernetes.io/projected/0a0c5d70-78fa-42c1-9e79-745b42839d04-kube-api-access-lpqqq\") pod \"marketplace-operator-79b997595-fpc7v\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.278515 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.278835 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nptk\" (UniqueName: \"kubernetes.io/projected/b0934816-1e19-4894-a691-f3e53551062a-kube-api-access-8nptk\") pod \"collect-profiles-29400330-vrtxk\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.279394 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45b0fcf9-821d-4504-acf3-2d1cfb83d093-images\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.279603 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-trusted-ca\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.279815 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a40305b2-c53d-4aa0-8b36-80485e145c46-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pv7sg\" (UID: \"a40305b2-c53d-4aa0-8b36-80485e145c46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.279923 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5mtz\" (UniqueName: \"kubernetes.io/projected/45b0fcf9-821d-4504-acf3-2d1cfb83d093-kube-api-access-f5mtz\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.280251 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2g6m\" (UniqueName: \"kubernetes.io/projected/0f5cd5d9-8313-4279-91eb-74a4b5c525e8-kube-api-access-t2g6m\") pod \"package-server-manager-789f6589d5-5klgr\" (UID: \"0f5cd5d9-8313-4279-91eb-74a4b5c525e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.280318 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2c5q\" (UniqueName: \"kubernetes.io/projected/8788fcfc-dcff-417e-af1b-1a0938543820-kube-api-access-l2c5q\") pod \"dns-default-qdgxl\" (UID: \"8788fcfc-dcff-417e-af1b-1a0938543820\") " pod="openshift-dns/dns-default-qdgxl" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.280426 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2201399d-776a-43fb-94cc-c288a6dae7df-srv-cert\") pod \"olm-operator-6b444d44fb-8r9jr\" (UID: \"2201399d-776a-43fb-94cc-c288a6dae7df\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" Nov 
24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.282295 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f5cd5d9-8313-4279-91eb-74a4b5c525e8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5klgr\" (UID: \"0f5cd5d9-8313-4279-91eb-74a4b5c525e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.282336 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-registration-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.282390 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-socket-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.282418 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e329e7d-cfae-4b82-8864-5166dce6a68d-certs\") pod \"machine-config-server-fk657\" (UID: \"3e329e7d-cfae-4b82-8864-5166dce6a68d\") " pod="openshift-machine-config-operator/machine-config-server-fk657" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.282469 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0934816-1e19-4894-a691-f3e53551062a-secret-volume\") pod \"collect-profiles-29400330-vrtxk\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.282518 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-tls\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.282540 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0934816-1e19-4894-a691-f3e53551062a-config-volume\") pod \"collect-profiles-29400330-vrtxk\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.282599 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2wl\" (UniqueName: \"kubernetes.io/projected/1b38e529-9f0b-443d-b320-60935a568f07-kube-api-access-2c2wl\") pod \"service-ca-9c57cc56f-29qkn\" (UID: \"1b38e529-9f0b-443d-b320-60935a568f07\") " pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.282628 4767 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.282652 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-images\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.282738 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/207b1355-917a-4e05-b680-45f50ec116dd-trusted-ca\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.282981 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:10.782969207 +0000 UTC m=+153.699952579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.284150 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-metrics-certs\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.284557 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b097a05-812b-4417-9410-fef3f70a193f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6b8mk\" (UID: \"8b097a05-812b-4417-9410-fef3f70a193f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.284582 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-default-certificate\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.284634 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f70726f8-befa-4ac3-8157-01c02fd1b2f1-apiservice-cert\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: 
\"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.284731 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b097a05-812b-4417-9410-fef3f70a193f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6b8mk\" (UID: \"8b097a05-812b-4417-9410-fef3f70a193f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.285359 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-images\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.285961 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b097a05-812b-4417-9410-fef3f70a193f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6b8mk\" (UID: \"8b097a05-812b-4417-9410-fef3f70a193f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.287904 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-tls\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.288003 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-proxy-tls\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.288073 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-mountpoint-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.288205 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1b38e529-9f0b-443d-b320-60935a568f07-signing-key\") pod \"service-ca-9c57cc56f-29qkn\" (UID: \"1b38e529-9f0b-443d-b320-60935a568f07\") " pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.288351 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpc7v\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 
21:41:10.288436 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/543e4218-8da0-43fe-bf43-1ec803edcc30-profile-collector-cert\") pod \"catalog-operator-68c6474976-9mbml\" (UID: \"543e4218-8da0-43fe-bf43-1ec803edcc30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.288574 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f70726f8-befa-4ac3-8157-01c02fd1b2f1-webhook-cert\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: \"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.289165 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8d6ce66-68d1-45fd-9e54-6baedf990e1d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mssg2\" (UID: \"e8d6ce66-68d1-45fd-9e54-6baedf990e1d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.289702 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04c820d8-acd5-42ce-8c38-7027eae3d43d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.289868 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-default-certificate\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.295372 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd8bc\" (UniqueName: \"kubernetes.io/projected/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-kube-api-access-sd8bc\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.298161 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.298485 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b097a05-812b-4417-9410-fef3f70a193f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6b8mk\" (UID: \"8b097a05-812b-4417-9410-fef3f70a193f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.304655 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed6be1b3-7da9-4f00-b7ed-3570e02210ca-proxy-tls\") pod \"machine-config-operator-74547568cd-l2gbt\" (UID: \"ed6be1b3-7da9-4f00-b7ed-3570e02210ca\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.316692 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g68sm\" (UniqueName: \"kubernetes.io/projected/ba0198db-c2d9-4b09-bb3c-88f60a4382c1-kube-api-access-g68sm\") pod \"router-default-5444994796-gc994\" (UID: \"ba0198db-c2d9-4b09-bb3c-88f60a4382c1\") " pod="openshift-ingress/router-default-5444994796-gc994" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.337599 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk829\" (UniqueName: \"kubernetes.io/projected/84c8e36e-716b-42d4-92f2-21540ab8568a-kube-api-access-qk829\") pod \"kube-storage-version-migrator-operator-b67b599dd-mv2hv\" (UID: \"84c8e36e-716b-42d4-92f2-21540ab8568a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.381623 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.383019 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjcjv\" (UniqueName: \"kubernetes.io/projected/9639809f-913e-44e8-91db-731add21e1a4-kube-api-access-wjcjv\") pod \"migrator-59844c95c7-qgfw9\" (UID: \"9639809f-913e-44e8-91db-731add21e1a4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.393704 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-gc994" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.393769 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.393994 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8788fcfc-dcff-417e-af1b-1a0938543820-config-volume\") pod \"dns-default-qdgxl\" (UID: \"8788fcfc-dcff-417e-af1b-1a0938543820\") " pod="openshift-dns/dns-default-qdgxl" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.394022 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfl5d\" (UniqueName: \"kubernetes.io/projected/3e329e7d-cfae-4b82-8864-5166dce6a68d-kube-api-access-bfl5d\") pod \"machine-config-server-fk657\" (UID: \"3e329e7d-cfae-4b82-8864-5166dce6a68d\") " pod="openshift-machine-config-operator/machine-config-server-fk657" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.394048 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49sg9\" (UniqueName: \"kubernetes.io/projected/c88eb915-2203-4a33-ba3e-ba039aa01296-kube-api-access-49sg9\") pod \"service-ca-operator-777779d784-rs9p9\" (UID: \"c88eb915-2203-4a33-ba3e-ba039aa01296\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.394073 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/207b1355-917a-4e05-b680-45f50ec116dd-metrics-tls\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.394099 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk5jz\" (UniqueName: \"kubernetes.io/projected/e8d6ce66-68d1-45fd-9e54-6baedf990e1d-kube-api-access-pk5jz\") pod \"control-plane-machine-set-operator-78cbb6b69f-mssg2\" (UID: \"e8d6ce66-68d1-45fd-9e54-6baedf990e1d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.394124 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r994\" (UniqueName: \"kubernetes.io/projected/b1883436-188c-45d6-b63b-45cdc822fe99-kube-api-access-8r994\") pod \"ingress-canary-6fxbb\" (UID: \"b1883436-188c-45d6-b63b-45cdc822fe99\") " pod="openshift-ingress-canary/ingress-canary-6fxbb" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.394143 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c88eb915-2203-4a33-ba3e-ba039aa01296-config\") pod \"service-ca-operator-777779d784-rs9p9\" (UID: \"c88eb915-2203-4a33-ba3e-ba039aa01296\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.394176 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/1b38e529-9f0b-443d-b320-60935a568f07-signing-cabundle\") pod \"service-ca-9c57cc56f-29qkn\" (UID: \"1b38e529-9f0b-443d-b320-60935a568f07\") " pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.394206 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hshdh\" (UniqueName: \"kubernetes.io/projected/f70726f8-befa-4ac3-8157-01c02fd1b2f1-kube-api-access-hshdh\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: \"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.394238 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/207b1355-917a-4e05-b680-45f50ec116dd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.394259 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a40305b2-c53d-4aa0-8b36-80485e145c46-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pv7sg\" (UID: \"a40305b2-c53d-4aa0-8b36-80485e145c46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.394306 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:10.89428773 +0000 UTC m=+153.811271102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.396817 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f70726f8-befa-4ac3-8157-01c02fd1b2f1-tmpfs\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: \"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397006 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpqqq\" (UniqueName: \"kubernetes.io/projected/0a0c5d70-78fa-42c1-9e79-745b42839d04-kube-api-access-lpqqq\") pod \"marketplace-operator-79b997595-fpc7v\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397027 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nptk\" (UniqueName: \"kubernetes.io/projected/b0934816-1e19-4894-a691-f3e53551062a-kube-api-access-8nptk\") pod \"collect-profiles-29400330-vrtxk\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397044 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a40305b2-c53d-4aa0-8b36-80485e145c46-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pv7sg\" (UID: \"a40305b2-c53d-4aa0-8b36-80485e145c46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397095 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2g6m\" (UniqueName: \"kubernetes.io/projected/0f5cd5d9-8313-4279-91eb-74a4b5c525e8-kube-api-access-t2g6m\") pod \"package-server-manager-789f6589d5-5klgr\" (UID: \"0f5cd5d9-8313-4279-91eb-74a4b5c525e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397111 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2c5q\" (UniqueName: \"kubernetes.io/projected/8788fcfc-dcff-417e-af1b-1a0938543820-kube-api-access-l2c5q\") pod \"dns-default-qdgxl\" (UID: \"8788fcfc-dcff-417e-af1b-1a0938543820\") " pod="openshift-dns/dns-default-qdgxl" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397129 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2201399d-776a-43fb-94cc-c288a6dae7df-srv-cert\") pod \"olm-operator-6b444d44fb-8r9jr\" (UID: \"2201399d-776a-43fb-94cc-c288a6dae7df\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397145 4767 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f5cd5d9-8313-4279-91eb-74a4b5c525e8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5klgr\" (UID: \"0f5cd5d9-8313-4279-91eb-74a4b5c525e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397163 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-registration-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397178 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-socket-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397195 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e329e7d-cfae-4b82-8864-5166dce6a68d-certs\") pod \"machine-config-server-fk657\" (UID: \"3e329e7d-cfae-4b82-8864-5166dce6a68d\") " pod="openshift-machine-config-operator/machine-config-server-fk657" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397211 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0934816-1e19-4894-a691-f3e53551062a-secret-volume\") pod \"collect-profiles-29400330-vrtxk\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397227 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c2wl\" (UniqueName: \"kubernetes.io/projected/1b38e529-9f0b-443d-b320-60935a568f07-kube-api-access-2c2wl\") pod \"service-ca-9c57cc56f-29qkn\" (UID: \"1b38e529-9f0b-443d-b320-60935a568f07\") " pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397242 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0934816-1e19-4894-a691-f3e53551062a-config-volume\") pod \"collect-profiles-29400330-vrtxk\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397281 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397298 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/207b1355-917a-4e05-b680-45f50ec116dd-trusted-ca\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.394906 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8788fcfc-dcff-417e-af1b-1a0938543820-config-volume\") pod \"dns-default-qdgxl\" (UID: \"8788fcfc-dcff-417e-af1b-1a0938543820\") " pod="openshift-dns/dns-default-qdgxl" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397757 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f70726f8-befa-4ac3-8157-01c02fd1b2f1-tmpfs\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: \"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.396051 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c88eb915-2203-4a33-ba3e-ba039aa01296-config\") pod \"service-ca-operator-777779d784-rs9p9\" (UID: \"c88eb915-2203-4a33-ba3e-ba039aa01296\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.397977 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1b38e529-9f0b-443d-b320-60935a568f07-signing-cabundle\") pod \"service-ca-9c57cc56f-29qkn\" (UID: \"1b38e529-9f0b-443d-b320-60935a568f07\") " pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.398026 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-registration-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.398260 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-socket-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.399320 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:10.899302913 +0000 UTC m=+153.816286365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.399883 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjl9j\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-kube-api-access-zjl9j\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.400577 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0934816-1e19-4894-a691-f3e53551062a-config-volume\") pod \"collect-profiles-29400330-vrtxk\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.400891 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/207b1355-917a-4e05-b680-45f50ec116dd-trusted-ca\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.401716 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f70726f8-befa-4ac3-8157-01c02fd1b2f1-apiservice-cert\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: \"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.401836 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-mountpoint-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.401865 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1b38e529-9f0b-443d-b320-60935a568f07-signing-key\") pod \"service-ca-9c57cc56f-29qkn\" (UID: \"1b38e529-9f0b-443d-b320-60935a568f07\") " pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.401905 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpc7v\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.401930 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/543e4218-8da0-43fe-bf43-1ec803edcc30-profile-collector-cert\") pod \"catalog-operator-68c6474976-9mbml\" (UID: \"543e4218-8da0-43fe-bf43-1ec803edcc30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.401957 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f70726f8-befa-4ac3-8157-01c02fd1b2f1-webhook-cert\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: \"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.401985 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8d6ce66-68d1-45fd-9e54-6baedf990e1d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mssg2\" (UID: \"e8d6ce66-68d1-45fd-9e54-6baedf990e1d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.402021 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-csi-data-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.403536 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-mountpoint-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.403893 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/543e4218-8da0-43fe-bf43-1ec803edcc30-srv-cert\") pod \"catalog-operator-68c6474976-9mbml\" (UID: \"543e4218-8da0-43fe-bf43-1ec803edcc30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.403928 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2201399d-776a-43fb-94cc-c288a6dae7df-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8r9jr\" (UID: \"2201399d-776a-43fb-94cc-c288a6dae7df\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.403969 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpc7v\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404019 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8c8z\" (UniqueName: \"kubernetes.io/projected/863df8e8-3e7f-4d7e-bb01-c63359a9024c-kube-api-access-p8c8z\") pod \"csi-hostpathplugin-qcv2q\" (UID: 
\"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404043 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1883436-188c-45d6-b63b-45cdc822fe99-cert\") pod \"ingress-canary-6fxbb\" (UID: \"b1883436-188c-45d6-b63b-45cdc822fe99\") " pod="openshift-ingress-canary/ingress-canary-6fxbb" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404063 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a40305b2-c53d-4aa0-8b36-80485e145c46-config\") pod \"kube-controller-manager-operator-78b949d7b-pv7sg\" (UID: \"a40305b2-c53d-4aa0-8b36-80485e145c46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404097 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8788fcfc-dcff-417e-af1b-1a0938543820-metrics-tls\") pod \"dns-default-qdgxl\" (UID: \"8788fcfc-dcff-417e-af1b-1a0938543820\") " pod="openshift-dns/dns-default-qdgxl" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404119 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hchkk\" (UniqueName: \"kubernetes.io/projected/543e4218-8da0-43fe-bf43-1ec803edcc30-kube-api-access-hchkk\") pod \"catalog-operator-68c6474976-9mbml\" (UID: \"543e4218-8da0-43fe-bf43-1ec803edcc30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404146 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e329e7d-cfae-4b82-8864-5166dce6a68d-node-bootstrap-token\") pod \"machine-config-server-fk657\" (UID: \"3e329e7d-cfae-4b82-8864-5166dce6a68d\") " pod="openshift-machine-config-operator/machine-config-server-fk657" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404171 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn7q6\" (UniqueName: \"kubernetes.io/projected/207b1355-917a-4e05-b680-45f50ec116dd-kube-api-access-cn7q6\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404194 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hw8k\" (UniqueName: \"kubernetes.io/projected/2201399d-776a-43fb-94cc-c288a6dae7df-kube-api-access-6hw8k\") pod \"olm-operator-6b444d44fb-8r9jr\" (UID: \"2201399d-776a-43fb-94cc-c288a6dae7df\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404216 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c88eb915-2203-4a33-ba3e-ba039aa01296-serving-cert\") pod \"service-ca-operator-777779d784-rs9p9\" (UID: \"c88eb915-2203-4a33-ba3e-ba039aa01296\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404218 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/207b1355-917a-4e05-b680-45f50ec116dd-metrics-tls\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404236 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-plugins-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404417 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-csi-data-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404454 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.404868 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f70726f8-befa-4ac3-8157-01c02fd1b2f1-apiservice-cert\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: \"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.406711 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/863df8e8-3e7f-4d7e-bb01-c63359a9024c-plugins-dir\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.407260 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2201399d-776a-43fb-94cc-c288a6dae7df-srv-cert\") pod \"olm-operator-6b444d44fb-8r9jr\" (UID: \"2201399d-776a-43fb-94cc-c288a6dae7df\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.407806 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f5cd5d9-8313-4279-91eb-74a4b5c525e8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5klgr\" (UID: \"0f5cd5d9-8313-4279-91eb-74a4b5c525e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.409496 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0934816-1e19-4894-a691-f3e53551062a-secret-volume\") pod \"collect-profiles-29400330-vrtxk\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.414370 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpc7v\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.414514 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a40305b2-c53d-4aa0-8b36-80485e145c46-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pv7sg\" (UID: \"a40305b2-c53d-4aa0-8b36-80485e145c46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.414924 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f70726f8-befa-4ac3-8157-01c02fd1b2f1-webhook-cert\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: \"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.414965 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8d6ce66-68d1-45fd-9e54-6baedf990e1d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mssg2\" (UID: \"e8d6ce66-68d1-45fd-9e54-6baedf990e1d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.414987 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e329e7d-cfae-4b82-8864-5166dce6a68d-certs\") pod \"machine-config-server-fk657\" (UID: \"3e329e7d-cfae-4b82-8864-5166dce6a68d\") " pod="openshift-machine-config-operator/machine-config-server-fk657" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.415031 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1b38e529-9f0b-443d-b320-60935a568f07-signing-key\") pod \"service-ca-9c57cc56f-29qkn\" (UID: \"1b38e529-9f0b-443d-b320-60935a568f07\") " pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.415403 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a40305b2-c53d-4aa0-8b36-80485e145c46-config\") pod \"kube-controller-manager-operator-78b949d7b-pv7sg\" (UID: \"a40305b2-c53d-4aa0-8b36-80485e145c46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.415583 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.417735 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8788fcfc-dcff-417e-af1b-1a0938543820-metrics-tls\") pod \"dns-default-qdgxl\" (UID: \"8788fcfc-dcff-417e-af1b-1a0938543820\") " pod="openshift-dns/dns-default-qdgxl" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.418196 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpc7v\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.420373 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.421233 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c88eb915-2203-4a33-ba3e-ba039aa01296-serving-cert\") pod \"service-ca-operator-777779d784-rs9p9\" (UID: \"c88eb915-2203-4a33-ba3e-ba039aa01296\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.421530 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-bound-sa-token\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.421785 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2201399d-776a-43fb-94cc-c288a6dae7df-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8r9jr\" (UID: \"2201399d-776a-43fb-94cc-c288a6dae7df\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.422066 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e329e7d-cfae-4b82-8864-5166dce6a68d-node-bootstrap-token\") pod \"machine-config-server-fk657\" (UID: \"3e329e7d-cfae-4b82-8864-5166dce6a68d\") " pod="openshift-machine-config-operator/machine-config-server-fk657" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.422448 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/543e4218-8da0-43fe-bf43-1ec803edcc30-profile-collector-cert\") pod \"catalog-operator-68c6474976-9mbml\" (UID: \"543e4218-8da0-43fe-bf43-1ec803edcc30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.422636 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1883436-188c-45d6-b63b-45cdc822fe99-cert\") pod \"ingress-canary-6fxbb\" (UID: \"b1883436-188c-45d6-b63b-45cdc822fe99\") " 
pod="openshift-ingress-canary/ingress-canary-6fxbb" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.424933 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/543e4218-8da0-43fe-bf43-1ec803edcc30-srv-cert\") pod \"catalog-operator-68c6474976-9mbml\" (UID: \"543e4218-8da0-43fe-bf43-1ec803edcc30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.426608 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.436089 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5mtz\" (UniqueName: \"kubernetes.io/projected/45b0fcf9-821d-4504-acf3-2d1cfb83d093-kube-api-access-f5mtz\") pod \"machine-api-operator-5694c8668f-mx2n5\" (UID: \"45b0fcf9-821d-4504-acf3-2d1cfb83d093\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.451734 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.456403 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6nlq\" (UniqueName: \"kubernetes.io/projected/8c0bd833-4b37-400e-8394-e8311efb343b-kube-api-access-q6nlq\") pod \"multus-admission-controller-857f4d67dd-n8vvb\" (UID: \"8c0bd833-4b37-400e-8394-e8311efb343b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb" Nov 24 21:41:10 crc kubenswrapper[4767]: W1124 21:41:10.485210 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcba6b5e8_eb8a_40fb_b684_c7f08ef491c5.slice/crio-631435a3a408b326b9207234ac00976667fb54e28445e13aca9429b5c61b5e61 WatchSource:0}: Error finding container 631435a3a408b326b9207234ac00976667fb54e28445e13aca9429b5c61b5e61: Status 404 returned error can't find the container with id 631435a3a408b326b9207234ac00976667fb54e28445e13aca9429b5c61b5e61 Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.485589 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.490664 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f7dpz"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.499346 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x76bn"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.501923 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfl5d\" (UniqueName: \"kubernetes.io/projected/3e329e7d-cfae-4b82-8864-5166dce6a68d-kube-api-access-bfl5d\") pod \"machine-config-server-fk657\" (UID: \"3e329e7d-cfae-4b82-8864-5166dce6a68d\") " pod="openshift-machine-config-operator/machine-config-server-fk657" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.507164 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.507288 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.00725756 +0000 UTC m=+153.924240932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.507552 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.507912 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.007902248 +0000 UTC m=+153.924885630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.515753 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r994\" (UniqueName: \"kubernetes.io/projected/b1883436-188c-45d6-b63b-45cdc822fe99-kube-api-access-8r994\") pod \"ingress-canary-6fxbb\" (UID: \"b1883436-188c-45d6-b63b-45cdc822fe99\") " pod="openshift-ingress-canary/ingress-canary-6fxbb" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.524713 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6fxbb" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.534145 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk5jz\" (UniqueName: \"kubernetes.io/projected/e8d6ce66-68d1-45fd-9e54-6baedf990e1d-kube-api-access-pk5jz\") pod \"control-plane-machine-set-operator-78cbb6b69f-mssg2\" (UID: \"e8d6ce66-68d1-45fd-9e54-6baedf990e1d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2" Nov 24 21:41:10 crc kubenswrapper[4767]: W1124 21:41:10.541934 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72e9e13e_3775_4751_9b9c_466f114cff18.slice/crio-7a09e867af4e521431fc4a5a165acdc2d94b30d1a636a495532efe238f3f1430 WatchSource:0}: Error finding container 7a09e867af4e521431fc4a5a165acdc2d94b30d1a636a495532efe238f3f1430: Status 404 returned error can't find the container with id 7a09e867af4e521431fc4a5a165acdc2d94b30d1a636a495532efe238f3f1430 Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.562826 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49sg9\" (UniqueName: \"kubernetes.io/projected/c88eb915-2203-4a33-ba3e-ba039aa01296-kube-api-access-49sg9\") pod \"service-ca-operator-777779d784-rs9p9\" (UID: \"c88eb915-2203-4a33-ba3e-ba039aa01296\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.566259 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mp4ng"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.567943 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4d8cc"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.574447 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h9pjp"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.579309 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hshdh\" (UniqueName: \"kubernetes.io/projected/f70726f8-befa-4ac3-8157-01c02fd1b2f1-kube-api-access-hshdh\") pod \"packageserver-d55dfcdfc-h9jtq\" (UID: \"f70726f8-befa-4ac3-8157-01c02fd1b2f1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.583110 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gmxsv"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.590669 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.591570 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/207b1355-917a-4e05-b680-45f50ec116dd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.608893 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.609216 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.109197165 +0000 UTC m=+154.026180547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.609766 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.610093 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.11008179 +0000 UTC m=+154.027065162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.618235 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpqqq\" (UniqueName: \"kubernetes.io/projected/0a0c5d70-78fa-42c1-9e79-745b42839d04-kube-api-access-lpqqq\") pod \"marketplace-operator-79b997595-fpc7v\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.640990 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nptk\" (UniqueName: \"kubernetes.io/projected/b0934816-1e19-4894-a691-f3e53551062a-kube-api-access-8nptk\") pod \"collect-profiles-29400330-vrtxk\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:10 crc kubenswrapper[4767]: W1124 21:41:10.650337 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b463b5d_b072_4032_aa46_9abe955f901b.slice/crio-53dcbc477b5ddbba50933bf1fb38ccedfe774b3044807a4168c32fbc22998a16 WatchSource:0}: Error finding container 53dcbc477b5ddbba50933bf1fb38ccedfe774b3044807a4168c32fbc22998a16: Status 404 returned error can't find the container with id 
53dcbc477b5ddbba50933bf1fb38ccedfe774b3044807a4168c32fbc22998a16 Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.652828 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nlpg6"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.654221 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vdb2k"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.659244 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.662769 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a40305b2-c53d-4aa0-8b36-80485e145c46-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pv7sg\" (UID: \"a40305b2-c53d-4aa0-8b36-80485e145c46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.679762 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2g6m\" (UniqueName: \"kubernetes.io/projected/0f5cd5d9-8313-4279-91eb-74a4b5c525e8-kube-api-access-t2g6m\") pod \"package-server-manager-789f6589d5-5klgr\" (UID: \"0f5cd5d9-8313-4279-91eb-74a4b5c525e8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.691830 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2c5q\" (UniqueName: \"kubernetes.io/projected/8788fcfc-dcff-417e-af1b-1a0938543820-kube-api-access-l2c5q\") pod \"dns-default-qdgxl\" (UID: \"8788fcfc-dcff-417e-af1b-1a0938543820\") " pod="openshift-dns/dns-default-qdgxl" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.711228 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.711408 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.211355797 +0000 UTC m=+154.128339169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.711502 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.711820 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.21180786 +0000 UTC m=+154.128791232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:10 crc kubenswrapper[4767]: W1124 21:41:10.724040 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e7aaeee_4486_42e5_be43_cdc4d23aa445.slice/crio-92562026eb06e28a4010c42295fa8ac039a06a880260dbf94975ab7c38b04ca8 WatchSource:0}: Error finding container 92562026eb06e28a4010c42295fa8ac039a06a880260dbf94975ab7c38b04ca8: Status 404 returned error can't find the container with id 92562026eb06e28a4010c42295fa8ac039a06a880260dbf94975ab7c38b04ca8 Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.728428 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.743824 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.755633 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.759320 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hw8k\" (UniqueName: \"kubernetes.io/projected/2201399d-776a-43fb-94cc-c288a6dae7df-kube-api-access-6hw8k\") pod \"olm-operator-6b444d44fb-8r9jr\" (UID: \"2201399d-776a-43fb-94cc-c288a6dae7df\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.759662 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c2wl\" (UniqueName: \"kubernetes.io/projected/1b38e529-9f0b-443d-b320-60935a568f07-kube-api-access-2c2wl\") pod \"service-ca-9c57cc56f-29qkn\" (UID: \"1b38e529-9f0b-443d-b320-60935a568f07\") " pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.771640 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.778151 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.778497 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8c8z\" (UniqueName: \"kubernetes.io/projected/863df8e8-3e7f-4d7e-bb01-c63359a9024c-kube-api-access-p8c8z\") pod \"csi-hostpathplugin-qcv2q\" (UID: \"863df8e8-3e7f-4d7e-bb01-c63359a9024c\") " pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.780980 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hchkk\" (UniqueName: \"kubernetes.io/projected/543e4218-8da0-43fe-bf43-1ec803edcc30-kube-api-access-hchkk\") pod \"catalog-operator-68c6474976-9mbml\" (UID: \"543e4218-8da0-43fe-bf43-1ec803edcc30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.785511 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.791419 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.798043 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn7q6\" (UniqueName: \"kubernetes.io/projected/207b1355-917a-4e05-b680-45f50ec116dd-kube-api-access-cn7q6\") pod \"ingress-operator-5b745b69d9-6h6nj\" (UID: \"207b1355-917a-4e05-b680-45f50ec116dd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.798779 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.800022 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fk657" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.814450 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.814723 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.314709082 +0000 UTC m=+154.231692454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.814794 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.816888 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.818942 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.838603 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.852475 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qdgxl" Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.856083 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.919278 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:10 crc kubenswrapper[4767]: E1124 21:41:10.919595 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.419582721 +0000 UTC m=+154.336566093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.920160 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.945953 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv"] Nov 24 21:41:10 crc kubenswrapper[4767]: I1124 21:41:10.976365 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wrbrz"] Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.020548 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.021017 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.520997521 +0000 UTC m=+154.437980893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.024346 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6fxbb"] Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.034821 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" Nov 24 21:41:11 crc kubenswrapper[4767]: W1124 21:41:11.053922 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b097a05_812b_4417_9410_fef3f70a193f.slice/crio-dd1efa2537b3830fc2c7908fd034b762615063c048efcef8de191d11e920ce25 WatchSource:0}: Error finding container dd1efa2537b3830fc2c7908fd034b762615063c048efcef8de191d11e920ce25: Status 404 returned error can't find the container with id dd1efa2537b3830fc2c7908fd034b762615063c048efcef8de191d11e920ce25 Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.061724 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" Nov 24 21:41:11 crc kubenswrapper[4767]: W1124 21:41:11.066497 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84c8e36e_716b_42d4_92f2_21540ab8568a.slice/crio-8a19a6b993a83c9c993485052e97f3b406e515b9873cd64f3b08b3bfc1009117 WatchSource:0}: Error finding container 8a19a6b993a83c9c993485052e97f3b406e515b9873cd64f3b08b3bfc1009117: Status 404 returned error can't find the container with id 8a19a6b993a83c9c993485052e97f3b406e515b9873cd64f3b08b3bfc1009117 Nov 24 21:41:11 crc kubenswrapper[4767]: W1124 21:41:11.069072 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7f7d9e2_58aa_4606_bca2_0e02f7a7f759.slice/crio-595a32c30272da94613a54eb329de4c79c646bea87399b9ea79ef7045e751ee6 WatchSource:0}: Error finding container 595a32c30272da94613a54eb329de4c79c646bea87399b9ea79ef7045e751ee6: Status 404 returned error can't find the container with id 595a32c30272da94613a54eb329de4c79c646bea87399b9ea79ef7045e751ee6 Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.123114 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.123292 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.623262186 +0000 UTC m=+154.540245558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.132899 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9" event={"ID":"9639809f-913e-44e8-91db-731add21e1a4","Type":"ContainerStarted","Data":"df925df3c766a8e2f44ff6e372f155f29a45687287babb8dcbdf7cf92c563c7a"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.153780 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" event={"ID":"0d541163-dd1b-4486-9939-4eaa9ec350bf","Type":"ContainerStarted","Data":"ae02e198930b5b01a542bbee7bba83fb9ea348f04865b7dedf4925b193c08aca"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.154078 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" event={"ID":"0d541163-dd1b-4486-9939-4eaa9ec350bf","Type":"ContainerStarted","Data":"3c42b3b5fd285903303756da44f536eb13d81311c7d62bf7f8b2b69e776d327c"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.155084 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4d8cc" event={"ID":"6b463b5d-b072-4032-aa46-9abe955f901b","Type":"ContainerStarted","Data":"53dcbc477b5ddbba50933bf1fb38ccedfe774b3044807a4168c32fbc22998a16"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.156679 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" event={"ID":"cba6b5e8-eb8a-40fb-b684-c7f08ef491c5","Type":"ContainerStarted","Data":"631435a3a408b326b9207234ac00976667fb54e28445e13aca9429b5c61b5e61"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.162096 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" event={"ID":"311b014f-099c-4f63-a46e-ccf2684847db","Type":"ContainerStarted","Data":"1d6d8a96cab40351985b41fb594a6731508bb145cb86d2f8893e89a1f63c8c58"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.177022 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv" event={"ID":"84c8e36e-716b-42d4-92f2-21540ab8568a","Type":"ContainerStarted","Data":"8a19a6b993a83c9c993485052e97f3b406e515b9873cd64f3b08b3bfc1009117"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.182087 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk" event={"ID":"8b097a05-812b-4417-9410-fef3f70a193f","Type":"ContainerStarted","Data":"dd1efa2537b3830fc2c7908fd034b762615063c048efcef8de191d11e920ce25"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.193505 4767 generic.go:334] "Generic (PLEG): container finished" podID="d5e859ee-0d8c-48c7-8251-25c79a040f99" containerID="90360128d2566a4192ab716a4ae6c227b410fccde3b21a57c7d8b0c0a43a23ba" exitCode=0 Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 
21:41:11.193805 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" event={"ID":"d5e859ee-0d8c-48c7-8251-25c79a040f99","Type":"ContainerDied","Data":"90360128d2566a4192ab716a4ae6c227b410fccde3b21a57c7d8b0c0a43a23ba"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.193861 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" event={"ID":"d5e859ee-0d8c-48c7-8251-25c79a040f99","Type":"ContainerStarted","Data":"0516923c7ba14308d1dde757a2b9477dd008c59a7c97632e6b580957164599d4"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.213925 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c" event={"ID":"45fc5ef3-7c5f-4920-8509-a4566b3e3c7d","Type":"ContainerStarted","Data":"83ef4b62dd5c47854dc31c5c3309b50a7a507729a6fbd23875463472fb6846fb"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.217795 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" event={"ID":"d58a04f0-dcce-4a15-9248-06fe40d8fceb","Type":"ContainerStarted","Data":"0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.219179 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.222649 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64" event={"ID":"571ff756-8e0a-4959-9dcb-b2c9aff1e7c0","Type":"ContainerStarted","Data":"026e49beb62a05f9c0d2c1ea0dcac513979507dd4cbb4e1c765bb9b7ae74ec81"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.225098 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.225770 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.725754287 +0000 UTC m=+154.642737649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.237723 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nlpg6" event={"ID":"1e7aaeee-4486-42e5-be43-cdc4d23aa445","Type":"ContainerStarted","Data":"92562026eb06e28a4010c42295fa8ac039a06a880260dbf94975ab7c38b04ca8"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.262235 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mp4ng" event={"ID":"86bad83e-cde9-43a8-803a-fda0e14ef559","Type":"ContainerStarted","Data":"6eb36b608bfc487f14d967db87f59a5851c3239d91d579c4e1b5f81175f9df33"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.264531 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" event={"ID":"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759","Type":"ContainerStarted","Data":"595a32c30272da94613a54eb329de4c79c646bea87399b9ea79ef7045e751ee6"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.267649 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz" event={"ID":"72e9e13e-3775-4751-9b9c-466f114cff18","Type":"ContainerStarted","Data":"7a09e867af4e521431fc4a5a165acdc2d94b30d1a636a495532efe238f3f1430"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.271293 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" event={"ID":"071ceb07-cd7e-43a9-b9f0-c2ef0837f336","Type":"ContainerStarted","Data":"f6f8a050c0ddc31f5f98c48b6fb83881092b483e2d9040182b38ca15ecd6f726"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.288494 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" event={"ID":"18770b7d-cd23-4e8b-89e5-67986cfbad15","Type":"ContainerStarted","Data":"77d42e7191a09a88c3400a488d140721aa4f13b316b772bf6a80f7e9831c3ac7"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.288593 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9"] Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.290388 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" event={"ID":"a867a71a-121a-4f12-8c81-7b14f0a4fd16","Type":"ContainerStarted","Data":"a6aafb55998347c5ee6775773c1d2745c2950058a82cb28d6d8fe124da41bdcc"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.307129 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl" event={"ID":"2ece62fa-00e6-4507-8fdd-ceca96eea6f9","Type":"ContainerStarted","Data":"47b19bae863b59f8dbe5a18ea1d12577cc4faf7c79562f7c3234506582e04203"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.307508 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl" event={"ID":"2ece62fa-00e6-4507-8fdd-ceca96eea6f9","Type":"ContainerStarted","Data":"52dbb93581da2bbb1cd9d21c4ba11cf8b4662e9862782036f0ed79e158917f81"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.316415 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" event={"ID":"c4651fee-37da-4038-895d-4b483d41240e","Type":"ContainerStarted","Data":"4eb5a3b37f0f5c324e1e6301f0419a5d1912ecef55a497f908e1efa6e185f350"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.321067 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" event={"ID":"6c3520aa-b012-4e35-8336-6655ef28eae8","Type":"ContainerStarted","Data":"0bbb8e8ee6345a5049fa32f913d12b90ec8726f471099ffa64369d43a56139b1"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.332524 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.341515 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.841484235 +0000 UTC m=+154.758467607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.362007 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mx2n5"] Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.369289 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" podStartSLOduration=128.369251987 podStartE2EDuration="2m8.369251987s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:11.368445104 +0000 UTC m=+154.285428486" watchObservedRunningTime="2025-11-24 21:41:11.369251987 +0000 UTC m=+154.286235359" Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.374146 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gc994" event={"ID":"ba0198db-c2d9-4b09-bb3c-88f60a4382c1","Type":"ContainerStarted","Data":"0e99c2d3440e8c49bdacd7f2370e66f2c2e1ab8bac8baddd40b361dcf4f9f5f2"} Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.381627 4767 generic.go:334] "Generic (PLEG): container finished" podID="cd491358-4379-40eb-a9b1-285abcbeb89c" containerID="40df04d5e126684b4f1d939180b4116d16dd3b0249c3bc57250e0fcda026cd3f" exitCode=0 Nov 24 21:41:11 crc 
kubenswrapper[4767]: I1124 21:41:11.381718 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" event={"ID":"cd491358-4379-40eb-a9b1-285abcbeb89c","Type":"ContainerDied","Data":"40df04d5e126684b4f1d939180b4116d16dd3b0249c3bc57250e0fcda026cd3f"}
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.381741 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" event={"ID":"cd491358-4379-40eb-a9b1-285abcbeb89c","Type":"ContainerStarted","Data":"52559fe17bb30953ec80194e8edbd75d4bc31f96aaf0ae7991ed008c15184510"}
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.387146 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt" event={"ID":"ed6be1b3-7da9-4f00-b7ed-3570e02210ca","Type":"ContainerStarted","Data":"850fa6f4d20ae598ab261b41c356ac71353cd7ff3727a8f65fd95f5c5aacce50"}
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.394463 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-gc994"
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.419474 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.419516 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.437868 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.439245 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:11.939226251 +0000 UTC m=+154.856209633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.509018 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8"
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.539861 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.541873 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.041853556 +0000 UTC m=+154.958836928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.641514 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.641772 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.141728043 +0000 UTC m=+155.058711415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.642141 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.642713 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.14270099 +0000 UTC m=+155.059684362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.743566 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.743929 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.243909655 +0000 UTC m=+155.160893027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.787888 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2"]
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.848781 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.849452 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.349435312 +0000 UTC m=+155.266418684 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.949667 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.950022 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.449975968 +0000 UTC m=+155.366959350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:11 crc kubenswrapper[4767]: I1124 21:41:11.950094 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:11 crc kubenswrapper[4767]: E1124 21:41:11.950447 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.450431991 +0000 UTC m=+155.367415363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.060349 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:12 crc kubenswrapper[4767]: E1124 21:41:12.060740 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.560724934 +0000 UTC m=+155.477708296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.078077 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nm78c" podStartSLOduration=129.078055048 podStartE2EDuration="2m9.078055048s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:12.055019941 +0000 UTC m=+154.972003303" watchObservedRunningTime="2025-11-24 21:41:12.078055048 +0000 UTC m=+154.995038420"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.080557 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.154547 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.161880 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:12 crc kubenswrapper[4767]: E1124 21:41:12.163882 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.663859743 +0000 UTC m=+155.580843115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.183692 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n8vvb"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.219325 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qdgxl"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.228927 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.230404 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-29qkn"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.259732 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.263494 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:12 crc kubenswrapper[4767]: E1124 21:41:12.264754 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.764730508 +0000 UTC m=+155.681713880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.293640 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-mp4ng" podStartSLOduration=129.293617692 podStartE2EDuration="2m9.293617692s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:12.289865805 +0000 UTC m=+155.206849197" watchObservedRunningTime="2025-11-24 21:41:12.293617692 +0000 UTC m=+155.210601064"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.366661 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:12 crc kubenswrapper[4767]: E1124 21:41:12.368859 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.868831075 +0000 UTC m=+155.785814447 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.388660 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qcv2q"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.388744 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.388769 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpc7v"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.388786 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.398927 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 21:41:12 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld
Nov 24 21:41:12 crc kubenswrapper[4767]: [+]process-running ok
Nov 24 21:41:12 crc kubenswrapper[4767]: healthz check failed
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.399288 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.399883 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr"]
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.411218 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nlpg6" event={"ID":"1e7aaeee-4486-42e5-be43-cdc4d23aa445","Type":"ContainerStarted","Data":"3da43896ab0665c96b505767726cb41deb9d735df70a449659bb1d95ea60f2e4"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.412773 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nlpg6"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.416063 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-whszl" podStartSLOduration=130.416040031 podStartE2EDuration="2m10.416040031s" podCreationTimestamp="2025-11-24 21:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:12.415766233 +0000 UTC m=+155.332749605" watchObservedRunningTime="2025-11-24 21:41:12.416040031 +0000 UTC m=+155.333023403"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.427734 4767 patch_prober.go:28] interesting pod/console-operator-58897d9998-nlpg6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.427803 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nlpg6" podUID="1e7aaeee-4486-42e5-be43-cdc4d23aa445" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Nov 24 21:41:12 crc kubenswrapper[4767]: W1124 21:41:12.430756 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod863df8e8_3e7f_4d7e_bb01_c63359a9024c.slice/crio-11712fcee9a6b7021582cb27a78eee5df2f5da05099c57f81d2264ab482fb1c4 WatchSource:0}: Error finding container 11712fcee9a6b7021582cb27a78eee5df2f5da05099c57f81d2264ab482fb1c4: Status 404 returned error can't find the container with id 11712fcee9a6b7021582cb27a78eee5df2f5da05099c57f81d2264ab482fb1c4
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.434736 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6fxbb" event={"ID":"b1883436-188c-45d6-b63b-45cdc822fe99","Type":"ContainerStarted","Data":"5b4de14c00e2114ebf9439cee6be3d6e528074f9bc8e4d847b1d93ea32c0ff3e"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.434788 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6fxbb" event={"ID":"b1883436-188c-45d6-b63b-45cdc822fe99","Type":"ContainerStarted","Data":"263b7dafef4f18a5d1e630e7d3c19cd67a632de5d21835dcccdd2ef46257690c"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.453001 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" event={"ID":"f70726f8-befa-4ac3-8157-01c02fd1b2f1","Type":"ContainerStarted","Data":"806f784d4263b55079ece745603ad7fdd3b7358fa09ffcd693a9d8ccc264e365"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.456326 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-gc994" podStartSLOduration=129.456308538 podStartE2EDuration="2m9.456308538s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:12.453372535 +0000 UTC m=+155.370355917" watchObservedRunningTime="2025-11-24 21:41:12.456308538 +0000 UTC m=+155.373291910"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.458418 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz" event={"ID":"72e9e13e-3775-4751-9b9c-466f114cff18","Type":"ContainerStarted","Data":"03196d54a6028ecd68f8c7b060979f588b7056e0abf1e1098c06833a4c875e05"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.479805 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:12 crc kubenswrapper[4767]: E1124 21:41:12.480092 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:12.980048475 +0000 UTC m=+155.897031847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.499827 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" event={"ID":"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759","Type":"ContainerStarted","Data":"e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.500416 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.507751 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv" event={"ID":"84c8e36e-716b-42d4-92f2-21540ab8568a","Type":"ContainerStarted","Data":"4166ffd6b2db46b2933b83171b040d5710897f422e6ac4bec1f00506eadb1536"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.510467 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" event={"ID":"a867a71a-121a-4f12-8c81-7b14f0a4fd16","Type":"ContainerStarted","Data":"4dfd885725f73e24579471dfc22f1a7d7c34398436ce52a73474f3e9ea70aca3"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.512904 4767 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wrbrz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.514050 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" podUID="e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.514171 4767 generic.go:334] "Generic (PLEG): container finished" podID="6c3520aa-b012-4e35-8336-6655ef28eae8" containerID="23a98116d894a4ac82dd7776716977fc19432f09be9369c845e6e150a661a831" exitCode=0
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.514233 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" event={"ID":"6c3520aa-b012-4e35-8336-6655ef28eae8","Type":"ContainerDied","Data":"23a98116d894a4ac82dd7776716977fc19432f09be9369c845e6e150a661a831"}
Nov 24 21:41:12 crc kubenswrapper[4767]: W1124 21:41:12.515085 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f5cd5d9_8313_4279_91eb_74a4b5c525e8.slice/crio-2505a41cdb8cea3c24573d754f3c73b2165fb0f9773b779d5b2b9caa71138d14 WatchSource:0}: Error finding container 2505a41cdb8cea3c24573d754f3c73b2165fb0f9773b779d5b2b9caa71138d14: Status 404 returned error can't find the container with id 2505a41cdb8cea3c24573d754f3c73b2165fb0f9773b779d5b2b9caa71138d14
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.520293 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" event={"ID":"cba6b5e8-eb8a-40fb-b684-c7f08ef491c5","Type":"ContainerStarted","Data":"9b92be378962b43e6ed55cf0d3fb6f2681b30f7a44af4d2f43508678e1a63101"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.525593 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" event={"ID":"c4651fee-37da-4038-895d-4b483d41240e","Type":"ContainerStarted","Data":"1e92c7ddb953cb8fdb4841c8c2ea6e74c4a2ee5862e51826624d84e561748a81"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.530569 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9" event={"ID":"c88eb915-2203-4a33-ba3e-ba039aa01296","Type":"ContainerStarted","Data":"6db2e5fcf087bd95fc60c25544a75e81d2ab3cdc3cee7fca281e3b541241e8dd"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.536900 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt" event={"ID":"ed6be1b3-7da9-4f00-b7ed-3570e02210ca","Type":"ContainerStarted","Data":"855bb652f4398f1bd25cb12acb84a1683f0c58d8e33c584563900d858febc732"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.579646 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" event={"ID":"18770b7d-cd23-4e8b-89e5-67986cfbad15","Type":"ContainerStarted","Data":"39d7d2d776ab26eaca449cd7f9f52273a60797e5009566b019e8ed829d6df461"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.582434 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mv2hv" podStartSLOduration=129.582417992 podStartE2EDuration="2m9.582417992s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:12.581836456 +0000 UTC m=+155.498819848" watchObservedRunningTime="2025-11-24 21:41:12.582417992 +0000 UTC m=+155.499401364"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.585849 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:12 crc kubenswrapper[4767]: E1124 21:41:12.586862 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:13.086850029 +0000 UTC m=+156.003833401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.612392 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" event={"ID":"2201399d-776a-43fb-94cc-c288a6dae7df","Type":"ContainerStarted","Data":"a47f88eb2021a8dc2bee61581f024295e26939dc8cd3f63148d75d533929fa6c"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.650792 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" event={"ID":"311b014f-099c-4f63-a46e-ccf2684847db","Type":"ContainerStarted","Data":"3c806da595f81eecc06992dfe5aa67a41ece2878eec9897ab7622e7aeadbae4f"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.651722 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.670427 4767 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x76bn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body=
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.670492 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" podUID="311b014f-099c-4f63-a46e-ccf2684847db" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.671409 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fk657" event={"ID":"3e329e7d-cfae-4b82-8864-5166dce6a68d","Type":"ContainerStarted","Data":"bc299330ebfe4244f59be9c9638426fbe054ad094c1b813f3d5be9124e758c67"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.671443 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fk657" event={"ID":"3e329e7d-cfae-4b82-8864-5166dce6a68d","Type":"ContainerStarted","Data":"51d5e4ccec1f271c017bccbb2f00bdef5260dda0483921d702c7c7786615ce9e"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.679879 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9" event={"ID":"9639809f-913e-44e8-91db-731add21e1a4","Type":"ContainerStarted","Data":"1ec74b39cbd3670f5343809646b2a011f4f02cc4094bf16ce007f36cff50a5bc"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.680959 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" event={"ID":"207b1355-917a-4e05-b680-45f50ec116dd","Type":"ContainerStarted","Data":"bc73d7015c97cc8f562b225073771eacb86e2c4456e6e0239a674ddfce5e29c6"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.681624 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" event={"ID":"1b38e529-9f0b-443d-b320-60935a568f07","Type":"ContainerStarted","Data":"9a435dc875a4e5c49a9efca8fc3b8d9d68d964b15a94131c11b69b4e485092f6"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.682605 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64" event={"ID":"571ff756-8e0a-4959-9dcb-b2c9aff1e7c0","Type":"ContainerStarted","Data":"fd6b4d8333c8af90d7512e5dafd5dfcf254db252c5a357f4b26870d57ec9e915"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.691752 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:12 crc kubenswrapper[4767]: E1124 21:41:12.693001 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:13.192986934 +0000 UTC m=+156.109970306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.722647 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" event={"ID":"071ceb07-cd7e-43a9-b9f0-c2ef0837f336","Type":"ContainerStarted","Data":"171d275cc065e15560c8e4383061ff759c2538c9074f201db6792f52bc5c0928"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.740984 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4d8cc" event={"ID":"6b463b5d-b072-4032-aa46-9abe955f901b","Type":"ContainerStarted","Data":"42b596d7d906230dbb19bd18841f9d6cf79a09f7a72376439860d36ffa9e27f0"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.743374 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4d8cc"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.747247 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-4d8cc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.747331 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4d8cc" podUID="6b463b5d-b072-4032-aa46-9abe955f901b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.795309 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:12 crc kubenswrapper[4767]: E1124 21:41:12.796146 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:13.296128612 +0000 UTC m=+156.213111994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.798654 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mp4ng" event={"ID":"86bad83e-cde9-43a8-803a-fda0e14ef559","Type":"ContainerStarted","Data":"c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.814547 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nlpg6" podStartSLOduration=129.814518186 podStartE2EDuration="2m9.814518186s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:12.813853317 +0000 UTC m=+155.730836689" watchObservedRunningTime="2025-11-24 21:41:12.814518186 +0000 UTC m=+155.731501568"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.815740 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb" event={"ID":"8c0bd833-4b37-400e-8394-e8311efb343b","Type":"ContainerStarted","Data":"c989f45d4cebe237ce033283ea7d7aacd72e5cce405b2ac7ca89804836b0299c"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.819190 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qdgxl" event={"ID":"8788fcfc-dcff-417e-af1b-1a0938543820","Type":"ContainerStarted","Data":"5cb8b8391c114192a4a321458943645bdff3a725776ab0dff2208006b856588e"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.825943 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gc994" event={"ID":"ba0198db-c2d9-4b09-bb3c-88f60a4382c1","Type":"ContainerStarted","Data":"fb7be838f3ae2bb161a407ed4296d14124e2c5a76d70bbef606fe58d47fa034e"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.844906 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" event={"ID":"0d541163-dd1b-4486-9939-4eaa9ec350bf","Type":"ContainerStarted","Data":"3afc4db157baa32eb6903937e6c8ecae4919f09e8904bfb025429e7c4fa5c408"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.855756 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" event={"ID":"d5e859ee-0d8c-48c7-8251-25c79a040f99","Type":"ContainerStarted","Data":"d72ef655361516f36a0cec7ff3f35a878b5d1eba9590e2d6ef72f84e46f0a818"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.856897 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.866115 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2" event={"ID":"e8d6ce66-68d1-45fd-9e54-6baedf990e1d","Type":"ContainerStarted","Data":"202a761cf7edf3e630f300787922971937034feb890018631ec197435d360b2f"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.876082 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" event={"ID":"b0934816-1e19-4894-a691-f3e53551062a","Type":"ContainerStarted","Data":"0b72810aa97e649d31b9d71f00863f26f8f6bc428e0743f6a3f86cd3e85f3e28"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.878356 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5" event={"ID":"45b0fcf9-821d-4504-acf3-2d1cfb83d093","Type":"ContainerStarted","Data":"25bb3ebb44f5ee3e3badb1358de0523a404dc3c0647a5fbfb5e8136d180af4d5"}
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.890006 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-h9pjp" podStartSLOduration=129.889989717 podStartE2EDuration="2m9.889989717s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:12.889795862 +0000 UTC m=+155.806779234" watchObservedRunningTime="2025-11-24 21:41:12.889989717 +0000 UTC m=+155.806973089"
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.896435 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:12 crc kubenswrapper[4767]: E1124 21:41:12.896687 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:13.396662928 +0000 UTC m=+156.313646290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:12 crc kubenswrapper[4767]: I1124 21:41:12.899955 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:12 crc kubenswrapper[4767]: E1124 21:41:12.901327 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:13.40131468 +0000 UTC m=+156.318298052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.001215 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:13 crc kubenswrapper[4767]: E1124 21:41:13.003044 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:13.503023459 +0000 UTC m=+156.420006831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.023944 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" podStartSLOduration=130.023927345 podStartE2EDuration="2m10.023927345s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.011413888 +0000 UTC m=+155.928397250" watchObservedRunningTime="2025-11-24 21:41:13.023927345 +0000 UTC m=+155.940910717"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.109204 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:13 crc kubenswrapper[4767]: E1124 21:41:13.109729 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:13.60971272 +0000 UTC m=+156.526696092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.135383 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9" podStartSLOduration=130.13535422 podStartE2EDuration="2m10.13535422s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.132217671 +0000 UTC m=+156.049201053" watchObservedRunningTime="2025-11-24 21:41:13.13535422 +0000 UTC m=+156.052337592"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.210405 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:13 crc kubenswrapper[4767]: E1124 21:41:13.219508 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:13.719468028 +0000 UTC m=+156.636451400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.312482 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:13 crc kubenswrapper[4767]: E1124 21:41:13.312764 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:13.812753516 +0000 UTC m=+156.729736888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.412164 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 21:41:13 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld
Nov 24 21:41:13 crc kubenswrapper[4767]: [+]process-running ok
Nov 24 21:41:13 crc kubenswrapper[4767]: healthz check failed
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.412592 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.413202 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:13 crc kubenswrapper[4767]: E1124 21:41:13.413549 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:13.913537049 +0000 UTC m=+156.830520421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.420531 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-49ppc" podStartSLOduration=130.420511657 podStartE2EDuration="2m10.420511657s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.41988446 +0000 UTC m=+156.336867832" watchObservedRunningTime="2025-11-24 21:41:13.420511657 +0000 UTC m=+156.337495029"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.469355 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6fxbb" podStartSLOduration=6.469327309 podStartE2EDuration="6.469327309s" podCreationTimestamp="2025-11-24 21:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.461878026 +0000 UTC m=+156.378861398" watchObservedRunningTime="2025-11-24 21:41:13.469327309 +0000 UTC m=+156.386310691"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.516825 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l49k7" podStartSLOduration=131.516805012 podStartE2EDuration="2m11.516805012s" podCreationTimestamp="2025-11-24 21:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.51465314 +0000 UTC m=+156.431636512" watchObservedRunningTime="2025-11-24 21:41:13.516805012 +0000 UTC m=+156.433788384"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.518261 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:13 crc kubenswrapper[4767]: E1124 21:41:13.518685 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.018662665 +0000 UTC m=+156.935646037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.584365 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fk657" podStartSLOduration=6.5843409170000005 podStartE2EDuration="6.584340917s" podCreationTimestamp="2025-11-24 21:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.584052588 +0000 UTC m=+156.501035960" watchObservedRunningTime="2025-11-24 21:41:13.584340917 +0000 UTC m=+156.501324289"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.593158 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4d8cc" podStartSLOduration=130.593134417 podStartE2EDuration="2m10.593134417s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.56199933 +0000 UTC m=+156.478982702" watchObservedRunningTime="2025-11-24 21:41:13.593134417 +0000 UTC m=+156.510117789"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.627182 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:13 crc kubenswrapper[4767]: E1124 21:41:13.632374 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.132344595 +0000 UTC m=+157.049327967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.632511 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:13 crc kubenswrapper[4767]: E1124 21:41:13.632887 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.13287961 +0000 UTC m=+157.049862982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.656722 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zck7s" podStartSLOduration=130.656706609 podStartE2EDuration="2m10.656706609s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.61568346 +0000 UTC m=+156.532666832" watchObservedRunningTime="2025-11-24 21:41:13.656706609 +0000 UTC m=+156.573689981"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.666054 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" podStartSLOduration=131.666029485 podStartE2EDuration="2m11.666029485s" podCreationTimestamp="2025-11-24 21:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.653897529 +0000 UTC m=+156.570880901" watchObservedRunningTime="2025-11-24 21:41:13.666029485 +0000 UTC m=+156.583012857"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.733066 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:41:13 crc kubenswrapper[4767]: E1124 21:41:13.733474 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.233454836 +0000 UTC m=+157.150438208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.769145 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gmxsv" podStartSLOduration=131.769127913 podStartE2EDuration="2m11.769127913s" podCreationTimestamp="2025-11-24 21:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.767558768 +0000 UTC m=+156.684542140" watchObservedRunningTime="2025-11-24 21:41:13.769127913 +0000 UTC m=+156.686111285"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.770704 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" podStartSLOduration=131.770695248 podStartE2EDuration="2m11.770695248s" podCreationTimestamp="2025-11-24 21:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.707059974 +0000 UTC m=+156.624043366" watchObservedRunningTime="2025-11-24 21:41:13.770695248 +0000 UTC m=+156.687678630"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.788297 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk" podStartSLOduration=130.788249248 podStartE2EDuration="2m10.788249248s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.787938889 +0000 UTC m=+156.704922251" watchObservedRunningTime="2025-11-24 21:41:13.788249248 +0000 UTC m=+156.705232620"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.818252 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2" podStartSLOduration=130.818234943 podStartE2EDuration="2m10.818234943s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.816591646 +0000 UTC m=+156.733575018" watchObservedRunningTime="2025-11-24 21:41:13.818234943 +0000 UTC m=+156.735218315"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.839639 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4"
Nov 24 21:41:13 crc kubenswrapper[4767]: E1124 21:41:13.840179 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.340160088 +0000 UTC m=+157.257143460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.897368 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64" event={"ID":"571ff756-8e0a-4959-9dcb-b2c9aff1e7c0","Type":"ContainerStarted","Data":"9a1e26dda3e39afdd458f885ff717ffde00ab63ffec45d1ec70717584c7332cb"}
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.912811 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" event={"ID":"0a0c5d70-78fa-42c1-9e79-745b42839d04","Type":"ContainerStarted","Data":"04075b9f1f2d922646375b0ea0c09ae956cce16bdbaad04d40fc5cac4e238400"}
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.912875 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" event={"ID":"0a0c5d70-78fa-42c1-9e79-745b42839d04","Type":"ContainerStarted","Data":"d74e521545e3b22ae6acf8eb24d85331d79a6bf549162924a60613580bfde6b2"}
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.913970 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.927081 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-f6j64" podStartSLOduration=131.927054444 podStartE2EDuration="2m11.927054444s" podCreationTimestamp="2025-11-24 21:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.923642797 +0000 UTC m=+156.840626169" watchObservedRunningTime="2025-11-24 21:41:13.927054444 +0000 UTC m=+156.844037826"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.930627 4767 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fpc7v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.930687 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" podUID="0a0c5d70-78fa-42c1-9e79-745b42839d04" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.944487 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\"
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:13 crc kubenswrapper[4767]: E1124 21:41:13.944820 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.44480493 +0000 UTC m=+157.361788302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.952007 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" podStartSLOduration=130.951987205 podStartE2EDuration="2m10.951987205s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.950643366 +0000 UTC m=+156.867626748" watchObservedRunningTime="2025-11-24 21:41:13.951987205 +0000 UTC m=+156.868970577" Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.969246 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9" event={"ID":"9639809f-913e-44e8-91db-731add21e1a4","Type":"ContainerStarted","Data":"b26c2e6a8b4a9d0ef109071a386aeb8c5c51f8b0150f3257d3e188bd17270802"} Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.987546 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qgfw9" podStartSLOduration=130.987523927 podStartE2EDuration="2m10.987523927s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:13.985814449 +0000 UTC m=+156.902797811" watchObservedRunningTime="2025-11-24 21:41:13.987523927 +0000 UTC m=+156.904507299" Nov 24 21:41:13 crc kubenswrapper[4767]: I1124 21:41:13.997393 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" event={"ID":"6c3520aa-b012-4e35-8336-6655ef28eae8","Type":"ContainerStarted","Data":"4ff75c7eff5fad8b883e7704b9934b62989d545b0bd1ea941069230e4f070fe0"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.011771 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" event={"ID":"207b1355-917a-4e05-b680-45f50ec116dd","Type":"ContainerStarted","Data":"1238a94ddd7caca9f43f622e6d9a1a6af1554f6fc7e342cbf66b5b4f9ec3611c"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.023701 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" 
event={"ID":"2201399d-776a-43fb-94cc-c288a6dae7df","Type":"ContainerStarted","Data":"9789db79e4ced762f56a1461e6eb147abbf6587fa4b56640fbc2af5e8b83c633"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.024279 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.039647 4767 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8r9jr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.039701 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" podUID="2201399d-776a-43fb-94cc-c288a6dae7df" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.040297 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" podStartSLOduration=131.040287791 podStartE2EDuration="2m11.040287791s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.040009043 +0000 UTC m=+156.956992415" watchObservedRunningTime="2025-11-24 21:41:14.040287791 +0000 UTC m=+156.957271163" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.042776 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" event={"ID":"b0934816-1e19-4894-a691-f3e53551062a","Type":"ContainerStarted","Data":"190d640e1bfc027105ca4e59f647df3347b3a6c15de52228a086112447438f1d"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.045367 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.045415 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz" event={"ID":"72e9e13e-3775-4751-9b9c-466f114cff18","Type":"ContainerStarted","Data":"9a5cea23232337486836bcb1cc4474f0612f009192211c07a07072a874278997"} Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.046454 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.546442907 +0000 UTC m=+157.463426279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.054616 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mssg2" event={"ID":"e8d6ce66-68d1-45fd-9e54-6baedf990e1d","Type":"ContainerStarted","Data":"73df1aeb2e8d9bed46cb5ef7d296b04efd3ff2759f10fcf2172f674060259c9f"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.071685 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" podStartSLOduration=131.071667165 podStartE2EDuration="2m11.071667165s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.07146168 +0000 UTC m=+156.988445052" watchObservedRunningTime="2025-11-24 21:41:14.071667165 +0000 UTC m=+156.988650537" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.101579 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" event={"ID":"cd491358-4379-40eb-a9b1-285abcbeb89c","Type":"ContainerStarted","Data":"f573c7e1e49ced5eca73c34e01fbe48a45814627d8dfa4442cb51e2227d7c9ba"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.105601 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rs9p9" event={"ID":"c88eb915-2203-4a33-ba3e-ba039aa01296","Type":"ContainerStarted","Data":"599d0e91c7f2cf53630f0da9cc2fc5285546aeccc8ebead77791a93ad58b0b93"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.115990 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-f7dpz" podStartSLOduration=131.115971188 podStartE2EDuration="2m11.115971188s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.11321808 +0000 UTC m=+157.030201452" watchObservedRunningTime="2025-11-24 21:41:14.115971188 +0000 UTC m=+157.032954560" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.116563 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" podStartSLOduration=131.116558005 podStartE2EDuration="2m11.116558005s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.095227557 +0000 UTC m=+157.012210929" watchObservedRunningTime="2025-11-24 21:41:14.116558005 +0000 UTC m=+157.033541367" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.120844 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb" 
event={"ID":"8c0bd833-4b37-400e-8394-e8311efb343b","Type":"ContainerStarted","Data":"f034645f77ffcffb12a6e7719933c8662d09ca6c3b6738328bacf2c686aa57f9"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.128596 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" event={"ID":"0f5cd5d9-8313-4279-91eb-74a4b5c525e8","Type":"ContainerStarted","Data":"c03af116aa1ad2cb1c2a6d8b346e34c3c9528fe932916770e43a10596fd75a0d"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.128796 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" event={"ID":"0f5cd5d9-8313-4279-91eb-74a4b5c525e8","Type":"ContainerStarted","Data":"2505a41cdb8cea3c24573d754f3c73b2165fb0f9773b779d5b2b9caa71138d14"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.129509 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.140053 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qdgxl" event={"ID":"8788fcfc-dcff-417e-af1b-1a0938543820","Type":"ContainerStarted","Data":"a4c20db5449a6ec43dbaed05c7b0ba7aee59ef4396ed220170d1c9b370fcf8ba"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.140389 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qdgxl" event={"ID":"8788fcfc-dcff-417e-af1b-1a0938543820","Type":"ContainerStarted","Data":"49380eec12eda28cbf430d2189ba10b2f9d02f594d9ffb61660c291f9865f0db"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.140756 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qdgxl" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.147295 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.147602 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.647575869 +0000 UTC m=+157.564559241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.154665 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" podStartSLOduration=131.1546469 podStartE2EDuration="2m11.1546469s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.149616037 +0000 UTC m=+157.066599409" watchObservedRunningTime="2025-11-24 21:41:14.1546469 +0000 UTC m=+157.071630272" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.170047 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" event={"ID":"a867a71a-121a-4f12-8c81-7b14f0a4fd16","Type":"ContainerStarted","Data":"abec3a5b3d579bbd58943d89bbb599579d1d78e716334eb6ab3f560a7be31889"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.198310 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qdgxl" podStartSLOduration=7.198289374 podStartE2EDuration="7.198289374s" podCreationTimestamp="2025-11-24 21:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.190177633 +0000 UTC m=+157.107161005" watchObservedRunningTime="2025-11-24 21:41:14.198289374 +0000 UTC m=+157.115272746" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.202596 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5" event={"ID":"45b0fcf9-821d-4504-acf3-2d1cfb83d093","Type":"ContainerStarted","Data":"a6012aeef4faf7ca54eff95e74dbf94be2812143476c4ffc2893252abb56947f"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.203344 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5" event={"ID":"45b0fcf9-821d-4504-acf3-2d1cfb83d093","Type":"ContainerStarted","Data":"eb49fb7a6206882732b6afd132d764dfa873d2cd5404bb99430a4bad16a3d5ef"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.213840 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" podStartSLOduration=131.213820687 podStartE2EDuration="2m11.213820687s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.212992693 +0000 UTC m=+157.129976065" watchObservedRunningTime="2025-11-24 21:41:14.213820687 +0000 UTC m=+157.130804059" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.217653 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" 
event={"ID":"543e4218-8da0-43fe-bf43-1ec803edcc30","Type":"ContainerStarted","Data":"ec9ca74680ad5d522fce18868b85d4b26a9cb3216d9427cf4abe099f95a03a0d"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.217707 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" event={"ID":"543e4218-8da0-43fe-bf43-1ec803edcc30","Type":"ContainerStarted","Data":"5dc713aa57f494f6172aa7fd864c0e1c59b0b58410651b7dda13c12294ff6f1e"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.218550 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.230158 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6b8mk" event={"ID":"8b097a05-812b-4417-9410-fef3f70a193f","Type":"ContainerStarted","Data":"339d5befcb75bd064ca767b2986eef66578318c61e7deb2e95c286c279afeb01"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.233515 4767 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9mbml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.233648 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" podUID="543e4218-8da0-43fe-bf43-1ec803edcc30" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.241095 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt" event={"ID":"ed6be1b3-7da9-4f00-b7ed-3570e02210ca","Type":"ContainerStarted","Data":"457ac59fd0252b08cb4e8b11f63c6e2d25acd04a097a1cdffca9538a2558b420"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.249232 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.249530 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.749518164 +0000 UTC m=+157.666501536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.250992 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" event={"ID":"1b38e529-9f0b-443d-b320-60935a568f07","Type":"ContainerStarted","Data":"c5d6492bb090e9353a7a7923e74467891931f50e43d8b325fa7f6e1d772a1724"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.254062 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-mx2n5" podStartSLOduration=131.254043843 podStartE2EDuration="2m11.254043843s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.251747758 +0000 UTC m=+157.168731130" watchObservedRunningTime="2025-11-24 21:41:14.254043843 +0000 UTC m=+157.171027215" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.258077 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" event={"ID":"863df8e8-3e7f-4d7e-bb01-c63359a9024c","Type":"ContainerStarted","Data":"11712fcee9a6b7021582cb27a78eee5df2f5da05099c57f81d2264ab482fb1c4"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.263343 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" event={"ID":"f70726f8-befa-4ac3-8157-01c02fd1b2f1","Type":"ContainerStarted","Data":"b9c23fc7743f1bf799df34ded4a473494eb5deb49f7251c30d6861f57138baf2"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.264180 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.270480 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" event={"ID":"a40305b2-c53d-4aa0-8b36-80485e145c46","Type":"ContainerStarted","Data":"57c7d37b8784473aef987165bad8b6adf36bd47c01fb81479634f0fcad86165b"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.270538 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" event={"ID":"a40305b2-c53d-4aa0-8b36-80485e145c46","Type":"ContainerStarted","Data":"76a1840642d03a1ee39a42f3088a511fc902fba4ea1fb1b05c0a5b16e153a086"} Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.271620 4767 patch_prober.go:28] interesting pod/console-operator-58897d9998-nlpg6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.271660 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nlpg6" 
podUID="1e7aaeee-4486-42e5-be43-cdc4d23aa445" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.272013 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-4d8cc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.272060 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4d8cc" podUID="6b463b5d-b072-4032-aa46-9abe955f901b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.272862 4767 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wrbrz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.272894 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" podUID="e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.276378 4767 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-h9jtq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" start-of-body= Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.276427 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" podUID="f70726f8-befa-4ac3-8157-01c02fd1b2f1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.337059 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" podStartSLOduration=131.337038349 podStartE2EDuration="2m11.337038349s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.334150326 +0000 UTC m=+157.251133698" watchObservedRunningTime="2025-11-24 21:41:14.337038349 +0000 UTC m=+157.254021721" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.337802 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5xcvh" podStartSLOduration=131.33779698 podStartE2EDuration="2m11.33779698s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.300061935 +0000 UTC m=+157.217045307" 
watchObservedRunningTime="2025-11-24 21:41:14.33779698 +0000 UTC m=+157.254780352" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.350591 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.352285 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.852245252 +0000 UTC m=+157.769228624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.407524 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:14 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:14 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:14 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.407582 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.452758 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.453237 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.95321916 +0000 UTC m=+157.870202532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.495812 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pv7sg" podStartSLOduration=131.495796713 podStartE2EDuration="2m11.495796713s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.495591057 +0000 UTC m=+157.412574449" watchObservedRunningTime="2025-11-24 21:41:14.495796713 +0000 UTC m=+157.412780085" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.496038 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-29qkn" podStartSLOduration=131.49603469 podStartE2EDuration="2m11.49603469s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.373823507 +0000 UTC m=+157.290806879" watchObservedRunningTime="2025-11-24 21:41:14.49603469 +0000 UTC m=+157.413018062" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.517824 4767 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x76bn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 24 21:41:14 crc kubenswrapper[4767]: [+]log ok Nov 24 21:41:14 crc kubenswrapper[4767]: [+]poststarthook/max-in-flight-filter ok Nov 24 21:41:14 crc kubenswrapper[4767]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 24 21:41:14 crc kubenswrapper[4767]: [-]poststarthook/openshift.io-StartUserInformer failed: reason withheld Nov 24 21:41:14 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.517883 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" podUID="311b014f-099c-4f63-a46e-ccf2684847db" containerName="oauth-openshift" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.553790 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.553944 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" podStartSLOduration=131.55392679 podStartE2EDuration="2m11.55392679s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 21:41:14.552051587 +0000 UTC m=+157.469034959" watchObservedRunningTime="2025-11-24 21:41:14.55392679 +0000 UTC m=+157.470910152" Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.554101 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.054082815 +0000 UTC m=+157.971066187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.585020 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l2gbt" podStartSLOduration=131.585000976 podStartE2EDuration="2m11.585000976s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:14.583903904 +0000 UTC m=+157.500887276" watchObservedRunningTime="2025-11-24 21:41:14.585000976 +0000 UTC m=+157.501984348" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.655653 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.655932 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.155921457 +0000 UTC m=+158.072904829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.756685 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.756942 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.256914845 +0000 UTC m=+158.173898217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.757193 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.757481 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.257469421 +0000 UTC m=+158.174452783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.838008 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.838054 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.858579 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.858704 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.358687016 +0000 UTC m=+158.275670388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.858797 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.859111 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.359101578 +0000 UTC m=+158.276084950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.959790 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.959968 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.459943382 +0000 UTC m=+158.376926754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:14 crc kubenswrapper[4767]: I1124 21:41:14.960095 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:14 crc kubenswrapper[4767]: E1124 21:41:14.960414 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.460400165 +0000 UTC m=+158.377383537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.061504 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.061650 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.561617279 +0000 UTC m=+158.478600651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.061784 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.062118 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.562109293 +0000 UTC m=+158.479092665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.163033 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.163201 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.663177864 +0000 UTC m=+158.580161236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.163348 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.163688 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.663674578 +0000 UTC m=+158.580657950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.264046 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.264251 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.764220874 +0000 UTC m=+158.681204246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.274931 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" event={"ID":"863df8e8-3e7f-4d7e-bb01-c63359a9024c","Type":"ContainerStarted","Data":"ffb9520f63c9b864c5dab6e1125675b88651f44be920082d1f8c64745fb148d7"} Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.276169 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" event={"ID":"0f5cd5d9-8313-4279-91eb-74a4b5c525e8","Type":"ContainerStarted","Data":"a945db261cb87594d70ce3e471f37a9a82381f430b55b7c7b0af1d15856178a4"} Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.277468 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb" event={"ID":"8c0bd833-4b37-400e-8394-e8311efb343b","Type":"ContainerStarted","Data":"f1e829158a1b794037b0747a2abe0cc02f2c3c49dca70eb8c7b0168818b5042b"} Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.279146 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" event={"ID":"6c3520aa-b012-4e35-8336-6655ef28eae8","Type":"ContainerStarted","Data":"bca497274f19dcf231f0522c1a29ed42544bc98f48d70a88eb86203bbbdf3e75"} Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.281244 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6h6nj" event={"ID":"207b1355-917a-4e05-b680-45f50ec116dd","Type":"ContainerStarted","Data":"7347e36856cd4ec2f98b3a68fc3c097845efcf012e00e9446b7fc0a639a15f43"} Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.282077 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-4d8cc container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.282110 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4d8cc" podUID="6b463b5d-b072-4032-aa46-9abe955f901b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.282648 4767 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fpc7v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.282678 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" podUID="0a0c5d70-78fa-42c1-9e79-745b42839d04" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.294928 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.303319 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.304150 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-n8vvb" podStartSLOduration=132.304125841 podStartE2EDuration="2m12.304125841s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:15.301254019 +0000 UTC m=+158.218237391" watchObservedRunningTime="2025-11-24 21:41:15.304125841 +0000 UTC m=+158.221109223" Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.337059 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9mbml" Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.366198 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nf9x2" Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.366473 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.369006 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" podStartSLOduration=133.36899639 podStartE2EDuration="2m13.36899639s" podCreationTimestamp="2025-11-24 21:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:15.337594295 +0000 UTC m=+158.254577667" watchObservedRunningTime="2025-11-24 21:41:15.36899639 +0000 UTC m=+158.285979762" Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.366705 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.866694944 +0000 UTC m=+158.783678316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.372755 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8r9jr" Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.397539 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:15 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:15 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:15 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.397900 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.472703 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.473095 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:15.973062396 +0000 UTC m=+158.890045778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.573807 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.574392 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.074371533 +0000 UTC m=+158.991354905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.675317 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.675503 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.175470735 +0000 UTC m=+159.092454107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.675834 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.676212 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.176200715 +0000 UTC m=+159.093184177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.776870 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.777301 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.277283576 +0000 UTC m=+159.194266948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.833205 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nlpg6" Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.878364 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.879437 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.379415287 +0000 UTC m=+159.296398659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:15 crc kubenswrapper[4767]: I1124 21:41:15.978978 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:15 crc kubenswrapper[4767]: E1124 21:41:15.979303 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.479287503 +0000 UTC m=+159.396270875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.080568 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.080948 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.58093165 +0000 UTC m=+159.497915022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.132664 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.181148 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.181342 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.681316491 +0000 UTC m=+159.598299863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.181683 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.181978 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.681955059 +0000 UTC m=+159.598938431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.282207 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.282418 4767 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-h9jtq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.282509 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.782425703 +0000 UTC m=+159.699409075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.282495 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" podUID="f70726f8-befa-4ac3-8157-01c02fd1b2f1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.282804 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.283134 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.783118403 +0000 UTC m=+159.700101775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.287589 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" event={"ID":"863df8e8-3e7f-4d7e-bb01-c63359a9024c","Type":"ContainerStarted","Data":"bdcae08ed42b93fc0138e92268749bef4df74e37dd4109d9c837a7a655a8f62c"} Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.287632 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" event={"ID":"863df8e8-3e7f-4d7e-bb01-c63359a9024c","Type":"ContainerStarted","Data":"c1a8fce7dde1df3f748c4b53a2bf8d0726d82bf2b6546dfafbe09cb5d9e4eff4"} Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.289009 4767 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fpc7v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.289041 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" podUID="0a0c5d70-78fa-42c1-9e79-745b42839d04" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection 
refused" Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.301413 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2c4l5" Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.384487 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.384692 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.884665206 +0000 UTC m=+159.801648568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.397701 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:16 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:16 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:16 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.397760 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.489295 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.489743 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:16.98972604 +0000 UTC m=+159.906709412 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.516780 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-h9jtq" Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.575878 4767 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.592343 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.592471 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.092444227 +0000 UTC m=+160.009427599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.592620 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.592929 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.092917771 +0000 UTC m=+160.009901143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.693619 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.693818 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.193779486 +0000 UTC m=+160.110762858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.694197 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.694572 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.194561918 +0000 UTC m=+160.111545380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.795048 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.795384 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.295366861 +0000 UTC m=+160.212350233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.873973 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x6bsn"] Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.874958 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.876703 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.886905 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6bsn"] Nov 24 21:41:16 crc kubenswrapper[4767]: I1124 21:41:16.907831 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:16 crc kubenswrapper[4767]: E1124 21:41:16.908241 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.408221307 +0000 UTC m=+160.325204749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.009524 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.009715 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-utilities\") pod \"community-operators-x6bsn\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.009777 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw7fn\" (UniqueName: \"kubernetes.io/projected/dc8951ce-1595-45e8-a952-9629251645c1-kube-api-access-bw7fn\") pod \"community-operators-x6bsn\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.009798 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-catalog-content\") pod \"community-operators-x6bsn\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:41:17 crc kubenswrapper[4767]: E1124 21:41:17.009902 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.509886175 +0000 UTC m=+160.426869547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.080807 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6rsd6"] Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.081810 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.100610 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.115465 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.115531 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-utilities\") pod \"community-operators-x6bsn\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.115600 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw7fn\" (UniqueName: \"kubernetes.io/projected/dc8951ce-1595-45e8-a952-9629251645c1-kube-api-access-bw7fn\") pod \"community-operators-x6bsn\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.115623 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-catalog-content\") pod \"community-operators-x6bsn\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.116077 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-catalog-content\") pod \"community-operators-x6bsn\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.116485 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-utilities\") pod \"community-operators-x6bsn\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:41:17 crc kubenswrapper[4767]: E1124 21:41:17.117029 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.617017178 +0000 UTC m=+160.534000550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.145691 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rsd6"] Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.156324 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw7fn\" (UniqueName: \"kubernetes.io/projected/dc8951ce-1595-45e8-a952-9629251645c1-kube-api-access-bw7fn\") pod \"community-operators-x6bsn\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.199114 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.217183 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:17 crc kubenswrapper[4767]: E1124 21:41:17.217345 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.717323797 +0000 UTC m=+160.634307179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.217485 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-utilities\") pod \"certified-operators-6rsd6\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.217556 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.217589 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-catalog-content\") pod \"certified-operators-6rsd6\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.217620 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljchs\" (UniqueName: \"kubernetes.io/projected/5749cc38-18d2-411b-b0e8-20dade9fbcfb-kube-api-access-ljchs\") pod \"certified-operators-6rsd6\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:41:17 crc kubenswrapper[4767]: E1124 21:41:17.217904 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.717893353 +0000 UTC m=+160.634876725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.287715 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fj9vz"] Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.288848 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.315953 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fj9vz"] Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.318703 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:17 crc kubenswrapper[4767]: E1124 21:41:17.318890 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.818866351 +0000 UTC m=+160.735849713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.318976 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-utilities\") pod \"certified-operators-6rsd6\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.319042 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.319071 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-catalog-content\") pod \"certified-operators-6rsd6\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.319100 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljchs\" (UniqueName: \"kubernetes.io/projected/5749cc38-18d2-411b-b0e8-20dade9fbcfb-kube-api-access-ljchs\") pod \"certified-operators-6rsd6\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.319500 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-catalog-content\") pod \"certified-operators-6rsd6\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:41:17 crc 
kubenswrapper[4767]: E1124 21:41:17.319511 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.819492469 +0000 UTC m=+160.736475841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.319826 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-utilities\") pod \"certified-operators-6rsd6\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.322634 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" event={"ID":"863df8e8-3e7f-4d7e-bb01-c63359a9024c","Type":"ContainerStarted","Data":"36e894021d08916f58920777d1834f22fdb766e0f6262f2e6a05565c4314c734"} Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.356946 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljchs\" (UniqueName: \"kubernetes.io/projected/5749cc38-18d2-411b-b0e8-20dade9fbcfb-kube-api-access-ljchs\") pod \"certified-operators-6rsd6\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.387000 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qcv2q" podStartSLOduration=10.386982862 podStartE2EDuration="10.386982862s" podCreationTimestamp="2025-11-24 21:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:17.385678035 +0000 UTC m=+160.302661407" watchObservedRunningTime="2025-11-24 21:41:17.386982862 +0000 UTC m=+160.303966244" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.405492 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:17 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:17 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:17 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.405545 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.422172 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.422498 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-catalog-content\") pod \"community-operators-fj9vz\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.422771 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcnhx\" (UniqueName: \"kubernetes.io/projected/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-kube-api-access-bcnhx\") pod \"community-operators-fj9vz\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.422825 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-utilities\") pod \"community-operators-fj9vz\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:41:17 crc kubenswrapper[4767]: E1124 21:41:17.423574 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:41:17.923554944 +0000 UTC m=+160.840538316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.438422 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.473185 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zztbr"] Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.477810 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.483466 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zztbr"] Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.528097 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcnhx\" (UniqueName: \"kubernetes.io/projected/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-kube-api-access-bcnhx\") pod \"community-operators-fj9vz\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.528131 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-utilities\") pod \"community-operators-fj9vz\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.528162 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.528191 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-catalog-content\") pod \"community-operators-fj9vz\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.528729 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-catalog-content\") pod \"community-operators-fj9vz\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:41:17 crc kubenswrapper[4767]: E1124 21:41:17.530492 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:41:18.030475452 +0000 UTC m=+160.947458824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck7c4" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.530740 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-utilities\") pod \"community-operators-fj9vz\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.540789 4767 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T21:41:16.575899996Z","Handler":null,"Name":""} Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.559383 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcnhx\" (UniqueName: \"kubernetes.io/projected/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-kube-api-access-bcnhx\") pod \"community-operators-fj9vz\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.561123 4767 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.561148 4767 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.606181 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.629019 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.629315 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr9br\" (UniqueName: \"kubernetes.io/projected/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-kube-api-access-tr9br\") pod \"certified-operators-zztbr\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.629339 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-utilities\") pod \"certified-operators-zztbr\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.629384 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-catalog-content\") pod \"certified-operators-zztbr\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.638691 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.666625 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6bsn"] Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.731294 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.731342 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr9br\" (UniqueName: \"kubernetes.io/projected/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-kube-api-access-tr9br\") pod \"certified-operators-zztbr\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.731371 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-utilities\") pod \"certified-operators-zztbr\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.731413 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-catalog-content\") pod \"certified-operators-zztbr\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.731870 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-catalog-content\") pod \"certified-operators-zztbr\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.732139 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-utilities\") pod \"certified-operators-zztbr\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.743200 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rsd6"] Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.770345 4767 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.770384 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.776013 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr9br\" (UniqueName: \"kubernetes.io/projected/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-kube-api-access-tr9br\") pod \"certified-operators-zztbr\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.802754 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.838729 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck7c4\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.891725 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:17 crc kubenswrapper[4767]: I1124 21:41:17.922393 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fj9vz"] Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.025094 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.026304 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.030854 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.031113 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.069292 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.140386 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/841bea93-8bc2-48e5-8e65-a98e32e934b4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"841bea93-8bc2-48e5-8e65-a98e32e934b4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.140457 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/841bea93-8bc2-48e5-8e65-a98e32e934b4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"841bea93-8bc2-48e5-8e65-a98e32e934b4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.146121 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zztbr"] Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.243953 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ck7c4"] Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.244966 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/841bea93-8bc2-48e5-8e65-a98e32e934b4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"841bea93-8bc2-48e5-8e65-a98e32e934b4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.245040 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/841bea93-8bc2-48e5-8e65-a98e32e934b4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"841bea93-8bc2-48e5-8e65-a98e32e934b4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.245112 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/841bea93-8bc2-48e5-8e65-a98e32e934b4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"841bea93-8bc2-48e5-8e65-a98e32e934b4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.267885 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/841bea93-8bc2-48e5-8e65-a98e32e934b4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"841bea93-8bc2-48e5-8e65-a98e32e934b4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.320426 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" 
path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.327433 4767 generic.go:334] "Generic (PLEG): container finished" podID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" containerID="b5502ec1af98346cce22fdef4c65fa04132525ff09410f31f5159863ae3b76b4" exitCode=0 Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.327533 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj9vz" event={"ID":"88e527cd-5ef0-49bd-bfde-7321ba67bb7e","Type":"ContainerDied","Data":"b5502ec1af98346cce22fdef4c65fa04132525ff09410f31f5159863ae3b76b4"} Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.327579 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj9vz" event={"ID":"88e527cd-5ef0-49bd-bfde-7321ba67bb7e","Type":"ContainerStarted","Data":"a21b6db139b985e34c46f96c0b2e515e5e62b0fa75c7913cc7e71a7773df2696"} Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.329103 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.330316 4767 generic.go:334] "Generic (PLEG): container finished" podID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" containerID="0a8f68b1d67b72f91685551c4f6eaf90e1ce2b401935e1004e140e254fdab2dc" exitCode=0 Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.330348 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rsd6" event={"ID":"5749cc38-18d2-411b-b0e8-20dade9fbcfb","Type":"ContainerDied","Data":"0a8f68b1d67b72f91685551c4f6eaf90e1ce2b401935e1004e140e254fdab2dc"} Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.330378 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rsd6" event={"ID":"5749cc38-18d2-411b-b0e8-20dade9fbcfb","Type":"ContainerStarted","Data":"a75fb89d3c59c6c2b1a543584259a250b347d0259d4e36541955838ea16955eb"} Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.335375 4767 generic.go:334] "Generic (PLEG): container finished" podID="dc8951ce-1595-45e8-a952-9629251645c1" containerID="a2d107a90e18ab111f38ec7b9946165ecc998d8a5ded07a26fa7a2181848a3af" exitCode=0 Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.335453 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6bsn" event={"ID":"dc8951ce-1595-45e8-a952-9629251645c1","Type":"ContainerDied","Data":"a2d107a90e18ab111f38ec7b9946165ecc998d8a5ded07a26fa7a2181848a3af"} Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.335486 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6bsn" event={"ID":"dc8951ce-1595-45e8-a952-9629251645c1","Type":"ContainerStarted","Data":"8f5633fe0dbb1bdaee5311a716db59a0e75792a0131b0c3b7f026a7b1e884a00"} Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.340259 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zztbr" event={"ID":"0dcd966d-f62e-4fa8-9f85-a99fa95cf673","Type":"ContainerStarted","Data":"9a855bcf5df3afc9b39cd8a8babced31248797f1efa02ce14c42e7a1ba7ade2d"} Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.341715 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" 
event={"ID":"04c820d8-acd5-42ce-8c38-7027eae3d43d","Type":"ContainerStarted","Data":"44a2644a54043ffd72531d9e2cd762d3c3e53bfcf8133864143c2967731eefeb"} Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.343380 4767 generic.go:334] "Generic (PLEG): container finished" podID="b0934816-1e19-4894-a691-f3e53551062a" containerID="190d640e1bfc027105ca4e59f647df3347b3a6c15de52228a086112447438f1d" exitCode=0 Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.343625 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" event={"ID":"b0934816-1e19-4894-a691-f3e53551062a","Type":"ContainerDied","Data":"190d640e1bfc027105ca4e59f647df3347b3a6c15de52228a086112447438f1d"} Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.397466 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:18 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:18 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:18 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.397552 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.445321 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:41:18 crc kubenswrapper[4767]: I1124 21:41:18.657228 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 21:41:18 crc kubenswrapper[4767]: W1124 21:41:18.666546 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod841bea93_8bc2_48e5_8e65_a98e32e934b4.slice/crio-53ab9b7f3688ac9c4c6b6c906b4984b6b085f7cb73c3a972a8570c4eeb7df404 WatchSource:0}: Error finding container 53ab9b7f3688ac9c4c6b6c906b4984b6b085f7cb73c3a972a8570c4eeb7df404: Status 404 returned error can't find the container with id 53ab9b7f3688ac9c4c6b6c906b4984b6b085f7cb73c3a972a8570c4eeb7df404 Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.066559 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sn4hh"] Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.102796 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.106218 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.109914 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sn4hh"] Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.259523 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-utilities\") pod \"redhat-marketplace-sn4hh\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.259599 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-catalog-content\") pod \"redhat-marketplace-sn4hh\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.259619 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l2hx\" (UniqueName: \"kubernetes.io/projected/4e342052-636d-42a3-a409-57cc627ec192-kube-api-access-4l2hx\") pod \"redhat-marketplace-sn4hh\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.352609 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"841bea93-8bc2-48e5-8e65-a98e32e934b4","Type":"ContainerStarted","Data":"2abf6487be754e03366b34dd4064c40e935ad8c70233fc73d876acdbe3a6b1b6"} Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.352659 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"841bea93-8bc2-48e5-8e65-a98e32e934b4","Type":"ContainerStarted","Data":"53ab9b7f3688ac9c4c6b6c906b4984b6b085f7cb73c3a972a8570c4eeb7df404"} Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.358030 4767 generic.go:334] "Generic (PLEG): container finished" podID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" containerID="698ca21b0874153360560fe99efc3800ddc5d650ec098c0234780f79e5648a46" exitCode=0 Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.358112 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zztbr" event={"ID":"0dcd966d-f62e-4fa8-9f85-a99fa95cf673","Type":"ContainerDied","Data":"698ca21b0874153360560fe99efc3800ddc5d650ec098c0234780f79e5648a46"} Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.360325 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-utilities\") pod \"redhat-marketplace-sn4hh\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.360407 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-catalog-content\") pod 
\"redhat-marketplace-sn4hh\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.360434 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l2hx\" (UniqueName: \"kubernetes.io/projected/4e342052-636d-42a3-a409-57cc627ec192-kube-api-access-4l2hx\") pod \"redhat-marketplace-sn4hh\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.361147 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-utilities\") pod \"redhat-marketplace-sn4hh\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.361214 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-catalog-content\") pod \"redhat-marketplace-sn4hh\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.362579 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" event={"ID":"04c820d8-acd5-42ce-8c38-7027eae3d43d","Type":"ContainerStarted","Data":"fdd60e36f4e6b452c4383406d2965886fe0c8870779408b49aed615c2f37447e"} Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.362690 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.374256 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.37423681 podStartE2EDuration="1.37423681s" podCreationTimestamp="2025-11-24 21:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:19.372675655 +0000 UTC m=+162.289659047" watchObservedRunningTime="2025-11-24 21:41:19.37423681 +0000 UTC m=+162.291220192" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.395395 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l2hx\" (UniqueName: \"kubernetes.io/projected/4e342052-636d-42a3-a409-57cc627ec192-kube-api-access-4l2hx\") pod \"redhat-marketplace-sn4hh\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.399514 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:19 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:19 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:19 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.399574 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.419660 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" podStartSLOduration=136.419642994 podStartE2EDuration="2m16.419642994s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:19.416488524 +0000 UTC m=+162.333471896" watchObservedRunningTime="2025-11-24 21:41:19.419642994 +0000 UTC m=+162.336626366" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.438551 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.490934 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dmcnm"] Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.492226 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.503225 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dmcnm"] Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.563980 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-utilities\") pod \"redhat-marketplace-dmcnm\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.564072 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66l5x\" (UniqueName: \"kubernetes.io/projected/f7a67465-9ccf-47fb-abda-d0c701f29a82-kube-api-access-66l5x\") pod \"redhat-marketplace-dmcnm\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.564126 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-catalog-content\") pod \"redhat-marketplace-dmcnm\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.665120 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-catalog-content\") pod \"redhat-marketplace-dmcnm\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.665168 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-utilities\") pod \"redhat-marketplace-dmcnm\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.665217 4767 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-66l5x\" (UniqueName: \"kubernetes.io/projected/f7a67465-9ccf-47fb-abda-d0c701f29a82-kube-api-access-66l5x\") pod \"redhat-marketplace-dmcnm\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.666159 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-catalog-content\") pod \"redhat-marketplace-dmcnm\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.666377 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-utilities\") pod \"redhat-marketplace-dmcnm\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.721107 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66l5x\" (UniqueName: \"kubernetes.io/projected/f7a67465-9ccf-47fb-abda-d0c701f29a82-kube-api-access-66l5x\") pod \"redhat-marketplace-dmcnm\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.739902 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.773949 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sn4hh"] Nov 24 21:41:19 crc kubenswrapper[4767]: W1124 21:41:19.792326 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e342052_636d_42a3_a409_57cc627ec192.slice/crio-73af8e1e48928e096645485238a7e29db91eb9945dd384ba387d968ac2e829ea WatchSource:0}: Error finding container 73af8e1e48928e096645485238a7e29db91eb9945dd384ba387d968ac2e829ea: Status 404 returned error can't find the container with id 73af8e1e48928e096645485238a7e29db91eb9945dd384ba387d968ac2e829ea Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.834330 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.867784 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0934816-1e19-4894-a691-f3e53551062a-secret-volume\") pod \"b0934816-1e19-4894-a691-f3e53551062a\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.867829 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nptk\" (UniqueName: \"kubernetes.io/projected/b0934816-1e19-4894-a691-f3e53551062a-kube-api-access-8nptk\") pod \"b0934816-1e19-4894-a691-f3e53551062a\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.867928 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0934816-1e19-4894-a691-f3e53551062a-config-volume\") pod \"b0934816-1e19-4894-a691-f3e53551062a\" (UID: \"b0934816-1e19-4894-a691-f3e53551062a\") " Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.868811 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0934816-1e19-4894-a691-f3e53551062a-config-volume" (OuterVolumeSpecName: "config-volume") pod "b0934816-1e19-4894-a691-f3e53551062a" (UID: "b0934816-1e19-4894-a691-f3e53551062a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.873155 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0934816-1e19-4894-a691-f3e53551062a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b0934816-1e19-4894-a691-f3e53551062a" (UID: "b0934816-1e19-4894-a691-f3e53551062a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.873463 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0934816-1e19-4894-a691-f3e53551062a-kube-api-access-8nptk" (OuterVolumeSpecName: "kube-api-access-8nptk") pod "b0934816-1e19-4894-a691-f3e53551062a" (UID: "b0934816-1e19-4894-a691-f3e53551062a"). InnerVolumeSpecName "kube-api-access-8nptk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.968881 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.969392 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0934816-1e19-4894-a691-f3e53551062a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.969419 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0934816-1e19-4894-a691-f3e53551062a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.969428 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nptk\" (UniqueName: \"kubernetes.io/projected/b0934816-1e19-4894-a691-f3e53551062a-kube-api-access-8nptk\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:19 crc kubenswrapper[4767]: E1124 21:41:19.969449 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0934816-1e19-4894-a691-f3e53551062a" containerName="collect-profiles" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.969465 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0934816-1e19-4894-a691-f3e53551062a" containerName="collect-profiles" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.969630 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0934816-1e19-4894-a691-f3e53551062a" containerName="collect-profiles" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.970205 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.972127 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.972888 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 21:41:19 crc kubenswrapper[4767]: I1124 21:41:19.982138 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.053762 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.053833 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.065464 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.071615 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8f094a59-e96b-4f46-b5d0-95bd70db27d4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.071710 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8f094a59-e96b-4f46-b5d0-95bd70db27d4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.072508 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vwm65"] Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.073798 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.078734 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.079750 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dmcnm"] Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.086178 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vwm65"] Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.173401 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8f094a59-e96b-4f46-b5d0-95bd70db27d4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.173456 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8f094a59-e96b-4f46-b5d0-95bd70db27d4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.173480 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dprs\" (UniqueName: \"kubernetes.io/projected/6fbb795f-ff35-4157-980c-baed2936f39e-kube-api-access-6dprs\") pod \"redhat-operators-vwm65\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.173512 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-catalog-content\") pod \"redhat-operators-vwm65\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.173595 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-utilities\") pod \"redhat-operators-vwm65\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.175052 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8f094a59-e96b-4f46-b5d0-95bd70db27d4\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.193533 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8f094a59-e96b-4f46-b5d0-95bd70db27d4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.214087 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-4d8cc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.214128 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-4d8cc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.214143 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4d8cc" podUID="6b463b5d-b072-4032-aa46-9abe955f901b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.214174 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4d8cc" podUID="6b463b5d-b072-4032-aa46-9abe955f901b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.238690 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.238733 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.241180 4767 patch_prober.go:28] interesting pod/console-f9d7485db-mp4ng container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.241221 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mp4ng" podUID="86bad83e-cde9-43a8-803a-fda0e14ef559" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.274829 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-utilities\") pod \"redhat-operators-vwm65\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.274931 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dprs\" (UniqueName: \"kubernetes.io/projected/6fbb795f-ff35-4157-980c-baed2936f39e-kube-api-access-6dprs\") pod 
\"redhat-operators-vwm65\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.274971 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-catalog-content\") pod \"redhat-operators-vwm65\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.275776 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-utilities\") pod \"redhat-operators-vwm65\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.277252 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-catalog-content\") pod \"redhat-operators-vwm65\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.291230 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dprs\" (UniqueName: \"kubernetes.io/projected/6fbb795f-ff35-4157-980c-baed2936f39e-kube-api-access-6dprs\") pod \"redhat-operators-vwm65\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.299472 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.373310 4767 generic.go:334] "Generic (PLEG): container finished" podID="841bea93-8bc2-48e5-8e65-a98e32e934b4" containerID="2abf6487be754e03366b34dd4064c40e935ad8c70233fc73d876acdbe3a6b1b6" exitCode=0 Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.373377 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"841bea93-8bc2-48e5-8e65-a98e32e934b4","Type":"ContainerDied","Data":"2abf6487be754e03366b34dd4064c40e935ad8c70233fc73d876acdbe3a6b1b6"} Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.376347 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" event={"ID":"b0934816-1e19-4894-a691-f3e53551062a","Type":"ContainerDied","Data":"0b72810aa97e649d31b9d71f00863f26f8f6bc428e0743f6a3f86cd3e85f3e28"} Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.376381 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b72810aa97e649d31b9d71f00863f26f8f6bc428e0743f6a3f86cd3e85f3e28" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.376390 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.380384 4767 generic.go:334] "Generic (PLEG): container finished" podID="4e342052-636d-42a3-a409-57cc627ec192" containerID="ec023dadb8b7467848800b5e0adfac2d3168fb7fd2b5009c0f4bec14248c675e" exitCode=0 Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.380480 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn4hh" event={"ID":"4e342052-636d-42a3-a409-57cc627ec192","Type":"ContainerDied","Data":"ec023dadb8b7467848800b5e0adfac2d3168fb7fd2b5009c0f4bec14248c675e"} Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.380516 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn4hh" event={"ID":"4e342052-636d-42a3-a409-57cc627ec192","Type":"ContainerStarted","Data":"73af8e1e48928e096645485238a7e29db91eb9945dd384ba387d968ac2e829ea"} Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.382064 4767 generic.go:334] "Generic (PLEG): container finished" podID="f7a67465-9ccf-47fb-abda-d0c701f29a82" containerID="d2cbe2573337950c41a4ae85d2a2634099655500666943565798317f2bbb2fa5" exitCode=0 Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.382143 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmcnm" event={"ID":"f7a67465-9ccf-47fb-abda-d0c701f29a82","Type":"ContainerDied","Data":"d2cbe2573337950c41a4ae85d2a2634099655500666943565798317f2bbb2fa5"} Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.382197 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmcnm" event={"ID":"f7a67465-9ccf-47fb-abda-d0c701f29a82","Type":"ContainerStarted","Data":"781458a53490b46d039af87f455547a13932b4e44e7e720d52f631f90e71423a"} Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.392069 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-vdb2k" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.394762 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-gc994" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.397769 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:20 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:20 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:20 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.397820 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.401721 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.480145 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jbxc8"] Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.483907 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.503196 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jbxc8"] Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.578828 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg4ff\" (UniqueName: \"kubernetes.io/projected/17c5b830-cb3a-4c80-984a-873e874152ab-kube-api-access-cg4ff\") pod \"redhat-operators-jbxc8\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.578913 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-catalog-content\") pod \"redhat-operators-jbxc8\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.578947 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-utilities\") pod \"redhat-operators-jbxc8\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.684241 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-utilities\") pod \"redhat-operators-jbxc8\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.684621 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg4ff\" (UniqueName: \"kubernetes.io/projected/17c5b830-cb3a-4c80-984a-873e874152ab-kube-api-access-cg4ff\") pod \"redhat-operators-jbxc8\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.684669 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-catalog-content\") pod \"redhat-operators-jbxc8\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.685167 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-catalog-content\") pod \"redhat-operators-jbxc8\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.685321 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-utilities\") pod \"redhat-operators-jbxc8\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.712047 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg4ff\" (UniqueName: \"kubernetes.io/projected/17c5b830-cb3a-4c80-984a-873e874152ab-kube-api-access-cg4ff\") pod \"redhat-operators-jbxc8\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.813074 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.823880 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.928349 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 21:41:20 crc kubenswrapper[4767]: W1124 21:41:20.935895 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8f094a59_e96b_4f46_b5d0_95bd70db27d4.slice/crio-aa97e9ee48345b8e6e2c9e328eb205c1b97f75304d636f73e2c89c830a6fa7b7 WatchSource:0}: Error finding container aa97e9ee48345b8e6e2c9e328eb205c1b97f75304d636f73e2c89c830a6fa7b7: Status 404 returned error can't find the container with id aa97e9ee48345b8e6e2c9e328eb205c1b97f75304d636f73e2c89c830a6fa7b7 Nov 24 21:41:20 crc kubenswrapper[4767]: I1124 21:41:20.989665 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vwm65"] Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.105663 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jbxc8"] Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.397479 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:21 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:21 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:21 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.397533 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.408881 4767 generic.go:334] "Generic (PLEG): container finished" podID="17c5b830-cb3a-4c80-984a-873e874152ab" containerID="4b110c87c2967c8d3b6f32c0ee42d3dea8f24b0da8336592c5ec39fb142b1de0" exitCode=0 Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.408981 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbxc8" event={"ID":"17c5b830-cb3a-4c80-984a-873e874152ab","Type":"ContainerDied","Data":"4b110c87c2967c8d3b6f32c0ee42d3dea8f24b0da8336592c5ec39fb142b1de0"} Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.409013 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jbxc8" event={"ID":"17c5b830-cb3a-4c80-984a-873e874152ab","Type":"ContainerStarted","Data":"2e98976ed215c287a6cce0419e955dcfeca4dcc48c0b78eeefa80ea9b726baa1"} Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.415216 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f094a59-e96b-4f46-b5d0-95bd70db27d4","Type":"ContainerStarted","Data":"fe035fc51d7a84d04180c74268a6a407af6897d253026ba645549f3b2e78619a"} Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.415255 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f094a59-e96b-4f46-b5d0-95bd70db27d4","Type":"ContainerStarted","Data":"aa97e9ee48345b8e6e2c9e328eb205c1b97f75304d636f73e2c89c830a6fa7b7"} Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.441927 4767 generic.go:334] "Generic (PLEG): container finished" podID="6fbb795f-ff35-4157-980c-baed2936f39e" containerID="a9c4f488e9c21b1282e26284cf71e85a60df111cfbe256546a54d594c98a990c" exitCode=0 Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.443120 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwm65" event={"ID":"6fbb795f-ff35-4157-980c-baed2936f39e","Type":"ContainerDied","Data":"a9c4f488e9c21b1282e26284cf71e85a60df111cfbe256546a54d594c98a990c"} Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.443147 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwm65" event={"ID":"6fbb795f-ff35-4157-980c-baed2936f39e","Type":"ContainerStarted","Data":"4f8a6122b038a71e6e737108be9104f65efbad617c22d8614534fcc9efc75c1b"} Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.475540 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.475524686 podStartE2EDuration="2.475524686s" podCreationTimestamp="2025-11-24 21:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:21.45356158 +0000 UTC m=+164.370544952" watchObservedRunningTime="2025-11-24 21:41:21.475524686 +0000 UTC m=+164.392508058" Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.705493 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.817540 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/841bea93-8bc2-48e5-8e65-a98e32e934b4-kube-api-access\") pod \"841bea93-8bc2-48e5-8e65-a98e32e934b4\" (UID: \"841bea93-8bc2-48e5-8e65-a98e32e934b4\") " Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.817633 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/841bea93-8bc2-48e5-8e65-a98e32e934b4-kubelet-dir\") pod \"841bea93-8bc2-48e5-8e65-a98e32e934b4\" (UID: \"841bea93-8bc2-48e5-8e65-a98e32e934b4\") " Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.817727 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/841bea93-8bc2-48e5-8e65-a98e32e934b4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "841bea93-8bc2-48e5-8e65-a98e32e934b4" (UID: "841bea93-8bc2-48e5-8e65-a98e32e934b4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.817930 4767 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/841bea93-8bc2-48e5-8e65-a98e32e934b4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.823417 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/841bea93-8bc2-48e5-8e65-a98e32e934b4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "841bea93-8bc2-48e5-8e65-a98e32e934b4" (UID: "841bea93-8bc2-48e5-8e65-a98e32e934b4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:21 crc kubenswrapper[4767]: I1124 21:41:21.919588 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/841bea93-8bc2-48e5-8e65-a98e32e934b4-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4767]: I1124 21:41:22.398110 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:22 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:22 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:22 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:22 crc kubenswrapper[4767]: I1124 21:41:22.398181 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:22 crc kubenswrapper[4767]: I1124 21:41:22.451917 4767 generic.go:334] "Generic (PLEG): container finished" podID="8f094a59-e96b-4f46-b5d0-95bd70db27d4" containerID="fe035fc51d7a84d04180c74268a6a407af6897d253026ba645549f3b2e78619a" exitCode=0 Nov 24 21:41:22 crc kubenswrapper[4767]: I1124 21:41:22.452020 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f094a59-e96b-4f46-b5d0-95bd70db27d4","Type":"ContainerDied","Data":"fe035fc51d7a84d04180c74268a6a407af6897d253026ba645549f3b2e78619a"} Nov 24 21:41:22 crc kubenswrapper[4767]: I1124 21:41:22.457179 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"841bea93-8bc2-48e5-8e65-a98e32e934b4","Type":"ContainerDied","Data":"53ab9b7f3688ac9c4c6b6c906b4984b6b085f7cb73c3a972a8570c4eeb7df404"} Nov 24 21:41:22 crc kubenswrapper[4767]: I1124 21:41:22.457214 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ab9b7f3688ac9c4c6b6c906b4984b6b085f7cb73c3a972a8570c4eeb7df404" Nov 24 21:41:22 crc kubenswrapper[4767]: I1124 21:41:22.457223 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:41:22 crc kubenswrapper[4767]: I1124 21:41:22.855652 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qdgxl" Nov 24 21:41:23 crc kubenswrapper[4767]: I1124 21:41:23.416931 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:23 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:23 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:23 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:23 crc kubenswrapper[4767]: I1124 21:41:23.416986 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:24 crc kubenswrapper[4767]: I1124 21:41:24.398111 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:24 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:24 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:24 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:24 crc kubenswrapper[4767]: I1124 21:41:24.398570 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:25 crc kubenswrapper[4767]: I1124 21:41:25.398293 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:25 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:25 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:25 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:25 crc kubenswrapper[4767]: I1124 21:41:25.398351 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:25 crc kubenswrapper[4767]: I1124 21:41:25.571800 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:41:25 crc kubenswrapper[4767]: I1124 21:41:25.591965 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3c69a6-6755-47bf-8e68-d70004d77621-metrics-certs\") pod \"network-metrics-daemon-q9q7p\" (UID: \"3b3c69a6-6755-47bf-8e68-d70004d77621\") " pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:41:25 crc 
kubenswrapper[4767]: I1124 21:41:25.852511 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q9q7p" Nov 24 21:41:26 crc kubenswrapper[4767]: I1124 21:41:26.397458 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:26 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:26 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:26 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:26 crc kubenswrapper[4767]: I1124 21:41:26.397539 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:27 crc kubenswrapper[4767]: I1124 21:41:27.396814 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:27 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:27 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:27 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:27 crc kubenswrapper[4767]: I1124 21:41:27.396909 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:28 crc kubenswrapper[4767]: I1124 21:41:28.397035 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:28 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:28 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:28 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:28 crc kubenswrapper[4767]: I1124 21:41:28.397084 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:29 crc kubenswrapper[4767]: I1124 21:41:29.396520 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:29 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:29 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:29 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:29 crc kubenswrapper[4767]: I1124 21:41:29.396637 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.233736 
4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4d8cc" Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.238595 4767 patch_prober.go:28] interesting pod/console-f9d7485db-mp4ng container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.238661 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mp4ng" podUID="86bad83e-cde9-43a8-803a-fda0e14ef559" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.341076 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.397696 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:30 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:30 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:30 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.397757 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.439783 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kubelet-dir\") pod \"8f094a59-e96b-4f46-b5d0-95bd70db27d4\" (UID: \"8f094a59-e96b-4f46-b5d0-95bd70db27d4\") " Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.439903 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kube-api-access\") pod \"8f094a59-e96b-4f46-b5d0-95bd70db27d4\" (UID: \"8f094a59-e96b-4f46-b5d0-95bd70db27d4\") " Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.440349 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8f094a59-e96b-4f46-b5d0-95bd70db27d4" (UID: "8f094a59-e96b-4f46-b5d0-95bd70db27d4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.446856 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8f094a59-e96b-4f46-b5d0-95bd70db27d4" (UID: "8f094a59-e96b-4f46-b5d0-95bd70db27d4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.517950 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f094a59-e96b-4f46-b5d0-95bd70db27d4","Type":"ContainerDied","Data":"aa97e9ee48345b8e6e2c9e328eb205c1b97f75304d636f73e2c89c830a6fa7b7"} Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.518008 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa97e9ee48345b8e6e2c9e328eb205c1b97f75304d636f73e2c89c830a6fa7b7" Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.518081 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.541663 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:30 crc kubenswrapper[4767]: I1124 21:41:30.541739 4767 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f094a59-e96b-4f46-b5d0-95bd70db27d4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:31 crc kubenswrapper[4767]: I1124 21:41:31.398335 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:31 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:31 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:31 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:31 crc kubenswrapper[4767]: I1124 21:41:31.398412 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:32 crc kubenswrapper[4767]: I1124 21:41:32.006547 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q9q7p"] Nov 24 21:41:32 crc kubenswrapper[4767]: I1124 21:41:32.398557 4767 patch_prober.go:28] interesting pod/router-default-5444994796-gc994 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:41:32 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Nov 24 21:41:32 crc kubenswrapper[4767]: [+]process-running ok Nov 24 21:41:32 crc kubenswrapper[4767]: healthz check failed Nov 24 21:41:32 crc kubenswrapper[4767]: I1124 21:41:32.398639 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gc994" podUID="ba0198db-c2d9-4b09-bb3c-88f60a4382c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:41:33 crc kubenswrapper[4767]: I1124 21:41:33.397836 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-gc994" Nov 24 21:41:33 crc kubenswrapper[4767]: I1124 21:41:33.400295 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ingress/router-default-5444994796-gc994" Nov 24 21:41:35 crc kubenswrapper[4767]: I1124 21:41:35.481137 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:41:35 crc kubenswrapper[4767]: I1124 21:41:35.481727 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:41:36 crc kubenswrapper[4767]: I1124 21:41:36.342344 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:41:36 crc kubenswrapper[4767]: I1124 21:41:36.552878 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" event={"ID":"3b3c69a6-6755-47bf-8e68-d70004d77621","Type":"ContainerStarted","Data":"d5d518eb0b9308277c26f45ad03322dabb4ba3905c12f957e6897cc5676a6e50"} Nov 24 21:41:37 crc kubenswrapper[4767]: I1124 21:41:37.898121 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:41:40 crc kubenswrapper[4767]: I1124 21:41:40.372089 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:40 crc kubenswrapper[4767]: I1124 21:41:40.376886 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:41:49 crc kubenswrapper[4767]: E1124 21:41:49.720659 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 21:41:49 crc kubenswrapper[4767]: E1124 21:41:49.721368 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tr9br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zztbr_openshift-marketplace(0dcd966d-f62e-4fa8-9f85-a99fa95cf673): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:41:49 crc kubenswrapper[4767]: E1124 21:41:49.722715 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-zztbr" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" Nov 24 21:41:49 crc kubenswrapper[4767]: E1124 21:41:49.800027 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 21:41:49 crc kubenswrapper[4767]: E1124 21:41:49.800245 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljchs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6rsd6_openshift-marketplace(5749cc38-18d2-411b-b0e8-20dade9fbcfb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:41:49 crc kubenswrapper[4767]: E1124 21:41:49.801471 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6rsd6" podUID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" Nov 24 21:41:50 crc kubenswrapper[4767]: I1124 21:41:50.750520 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5klgr" Nov 24 21:41:50 crc kubenswrapper[4767]: E1124 21:41:50.918816 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 21:41:50 crc kubenswrapper[4767]: E1124 21:41:50.918978 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cg4ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jbxc8_openshift-marketplace(17c5b830-cb3a-4c80-984a-873e874152ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:41:50 crc kubenswrapper[4767]: E1124 21:41:50.920171 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jbxc8" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" Nov 24 21:41:52 crc kubenswrapper[4767]: E1124 21:41:52.124126 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jbxc8" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" Nov 24 21:41:52 crc kubenswrapper[4767]: E1124 21:41:52.124146 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6rsd6" podUID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" Nov 24 21:41:52 crc kubenswrapper[4767]: E1124 21:41:52.124448 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zztbr" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" Nov 24 21:41:52 crc kubenswrapper[4767]: E1124 21:41:52.349625 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 21:41:52 crc kubenswrapper[4767]: E1124 21:41:52.349775 4767 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bw7fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-x6bsn_openshift-marketplace(dc8951ce-1595-45e8-a952-9629251645c1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:41:52 crc kubenswrapper[4767]: E1124 21:41:52.351134 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-x6bsn" podUID="dc8951ce-1595-45e8-a952-9629251645c1" Nov 24 21:41:53 crc kubenswrapper[4767]: E1124 21:41:53.776057 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-x6bsn" podUID="dc8951ce-1595-45e8-a952-9629251645c1" Nov 24 21:41:54 crc kubenswrapper[4767]: I1124 21:41:54.657650 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" event={"ID":"3b3c69a6-6755-47bf-8e68-d70004d77621","Type":"ContainerStarted","Data":"52cfe8194ae1e6b03391a9b05f445035050839b576d9fc87e8bdc14fa1b7f7c3"} Nov 24 21:41:55 crc kubenswrapper[4767]: E1124 21:41:55.664413 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 21:41:55 crc kubenswrapper[4767]: E1124 21:41:55.665490 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66l5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dmcnm_openshift-marketplace(f7a67465-9ccf-47fb-abda-d0c701f29a82): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:41:55 crc kubenswrapper[4767]: E1124 21:41:55.666872 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dmcnm" podUID="f7a67465-9ccf-47fb-abda-d0c701f29a82" Nov 24 21:41:55 crc kubenswrapper[4767]: E1124 21:41:55.736668 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 21:41:55 crc kubenswrapper[4767]: E1124 21:41:55.736823 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bcnhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fj9vz_openshift-marketplace(88e527cd-5ef0-49bd-bfde-7321ba67bb7e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:41:55 crc kubenswrapper[4767]: E1124 21:41:55.738063 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fj9vz" podUID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" Nov 24 21:41:55 crc kubenswrapper[4767]: E1124 21:41:55.926522 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 21:41:55 crc kubenswrapper[4767]: E1124 21:41:55.926700 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l2hx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sn4hh_openshift-marketplace(4e342052-636d-42a3-a409-57cc627ec192): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:41:55 crc kubenswrapper[4767]: E1124 21:41:55.928482 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-sn4hh" podUID="4e342052-636d-42a3-a409-57cc627ec192" Nov 24 21:41:56 crc kubenswrapper[4767]: E1124 21:41:56.671835 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sn4hh" podUID="4e342052-636d-42a3-a409-57cc627ec192" Nov 24 21:41:56 crc kubenswrapper[4767]: E1124 21:41:56.672203 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dmcnm" podUID="f7a67465-9ccf-47fb-abda-d0c701f29a82" Nov 24 21:41:56 crc kubenswrapper[4767]: E1124 21:41:56.672252 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fj9vz" podUID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" Nov 24 21:41:57 crc kubenswrapper[4767]: I1124 21:41:57.678489 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q9q7p" event={"ID":"3b3c69a6-6755-47bf-8e68-d70004d77621","Type":"ContainerStarted","Data":"e9a7c7a86931a0fa4da100f1056b017f35f7971b12a14ebcd572a7ddcf477f91"} Nov 24 21:41:57 crc kubenswrapper[4767]: I1124 21:41:57.682761 4767 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwm65" event={"ID":"6fbb795f-ff35-4157-980c-baed2936f39e","Type":"ContainerStarted","Data":"130c06ce3a1741ada569a30971ceb34d6452d7a81f8417e6279609020d8a4ef8"} Nov 24 21:41:57 crc kubenswrapper[4767]: I1124 21:41:57.702634 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-q9q7p" podStartSLOduration=174.702608973 podStartE2EDuration="2m54.702608973s" podCreationTimestamp="2025-11-24 21:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:57.701556465 +0000 UTC m=+200.618539867" watchObservedRunningTime="2025-11-24 21:41:57.702608973 +0000 UTC m=+200.619592385" Nov 24 21:41:58 crc kubenswrapper[4767]: I1124 21:41:58.689173 4767 generic.go:334] "Generic (PLEG): container finished" podID="6fbb795f-ff35-4157-980c-baed2936f39e" containerID="130c06ce3a1741ada569a30971ceb34d6452d7a81f8417e6279609020d8a4ef8" exitCode=0 Nov 24 21:41:58 crc kubenswrapper[4767]: I1124 21:41:58.689231 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwm65" event={"ID":"6fbb795f-ff35-4157-980c-baed2936f39e","Type":"ContainerDied","Data":"130c06ce3a1741ada569a30971ceb34d6452d7a81f8417e6279609020d8a4ef8"} Nov 24 21:41:59 crc kubenswrapper[4767]: I1124 21:41:59.696710 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwm65" event={"ID":"6fbb795f-ff35-4157-980c-baed2936f39e","Type":"ContainerStarted","Data":"35c579af1f880c5e4035a0baaf75b4954a46498ee162433515a5322e19f36e73"} Nov 24 21:41:59 crc kubenswrapper[4767]: I1124 21:41:59.717713 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vwm65" podStartSLOduration=2.008530909 podStartE2EDuration="39.717694309s" podCreationTimestamp="2025-11-24 21:41:20 +0000 UTC" firstStartedPulling="2025-11-24 21:41:21.445514461 +0000 UTC m=+164.362497833" lastFinishedPulling="2025-11-24 21:41:59.154677871 +0000 UTC m=+202.071661233" observedRunningTime="2025-11-24 21:41:59.717662678 +0000 UTC m=+202.634646050" watchObservedRunningTime="2025-11-24 21:41:59.717694309 +0000 UTC m=+202.634677681" Nov 24 21:42:00 crc kubenswrapper[4767]: I1124 21:42:00.402490 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:42:00 crc kubenswrapper[4767]: I1124 21:42:00.402941 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:42:01 crc kubenswrapper[4767]: I1124 21:42:01.547219 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vwm65" podUID="6fbb795f-ff35-4157-980c-baed2936f39e" containerName="registry-server" probeResult="failure" output=< Nov 24 21:42:01 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 21:42:01 crc kubenswrapper[4767]: > Nov 24 21:42:04 crc kubenswrapper[4767]: I1124 21:42:04.691047 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x76bn"] Nov 24 21:42:05 crc kubenswrapper[4767]: I1124 21:42:05.481523 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:42:05 crc kubenswrapper[4767]: I1124 21:42:05.481878 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:42:05 crc kubenswrapper[4767]: I1124 21:42:05.481929 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:42:05 crc kubenswrapper[4767]: I1124 21:42:05.482561 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:42:05 crc kubenswrapper[4767]: I1124 21:42:05.482702 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9" gracePeriod=600 Nov 24 21:42:05 crc kubenswrapper[4767]: I1124 21:42:05.741565 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9" exitCode=0 Nov 24 21:42:05 crc kubenswrapper[4767]: I1124 21:42:05.741962 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9"} Nov 24 21:42:06 crc kubenswrapper[4767]: I1124 21:42:06.751643 4767 generic.go:334] "Generic (PLEG): container finished" podID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" containerID="4257b1c622fd03e5e61df280696b1cafd782f16653d85c2648cf3a5685bb401f" exitCode=0 Nov 24 21:42:06 crc kubenswrapper[4767]: I1124 21:42:06.751706 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zztbr" event={"ID":"0dcd966d-f62e-4fa8-9f85-a99fa95cf673","Type":"ContainerDied","Data":"4257b1c622fd03e5e61df280696b1cafd782f16653d85c2648cf3a5685bb401f"} Nov 24 21:42:06 crc kubenswrapper[4767]: I1124 21:42:06.756617 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"318061ec20e01e7b9e6b9071eca399b8371f6aa151e176eee69db149828d7014"} Nov 24 21:42:07 crc kubenswrapper[4767]: I1124 21:42:07.762122 4767 generic.go:334] "Generic (PLEG): container finished" podID="dc8951ce-1595-45e8-a952-9629251645c1" containerID="13254a02852ae0c5a22a4b1ad8eb4ca276cef6d68671d0e08833678336405231" exitCode=0 Nov 24 21:42:07 crc kubenswrapper[4767]: I1124 21:42:07.762221 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6bsn" 
event={"ID":"dc8951ce-1595-45e8-a952-9629251645c1","Type":"ContainerDied","Data":"13254a02852ae0c5a22a4b1ad8eb4ca276cef6d68671d0e08833678336405231"} Nov 24 21:42:07 crc kubenswrapper[4767]: I1124 21:42:07.768489 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zztbr" event={"ID":"0dcd966d-f62e-4fa8-9f85-a99fa95cf673","Type":"ContainerStarted","Data":"dbfb62f29f4801e40a5654dd9cca07cdbb96575183221d2b3628beb5d786978e"} Nov 24 21:42:07 crc kubenswrapper[4767]: I1124 21:42:07.800872 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zztbr" podStartSLOduration=3.009211389 podStartE2EDuration="50.800849739s" podCreationTimestamp="2025-11-24 21:41:17 +0000 UTC" firstStartedPulling="2025-11-24 21:41:19.360442046 +0000 UTC m=+162.277425418" lastFinishedPulling="2025-11-24 21:42:07.152080396 +0000 UTC m=+210.069063768" observedRunningTime="2025-11-24 21:42:07.799073451 +0000 UTC m=+210.716056843" watchObservedRunningTime="2025-11-24 21:42:07.800849739 +0000 UTC m=+210.717833111" Nov 24 21:42:07 crc kubenswrapper[4767]: I1124 21:42:07.804419 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:42:07 crc kubenswrapper[4767]: I1124 21:42:07.804462 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:42:08 crc kubenswrapper[4767]: I1124 21:42:08.775295 4767 generic.go:334] "Generic (PLEG): container finished" podID="17c5b830-cb3a-4c80-984a-873e874152ab" containerID="de1c90ee078443bdd463ffedab05f2e1d673a2c163b79f6ad01ef59ab363f509" exitCode=0 Nov 24 21:42:08 crc kubenswrapper[4767]: I1124 21:42:08.775316 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbxc8" event={"ID":"17c5b830-cb3a-4c80-984a-873e874152ab","Type":"ContainerDied","Data":"de1c90ee078443bdd463ffedab05f2e1d673a2c163b79f6ad01ef59ab363f509"} Nov 24 21:42:08 crc kubenswrapper[4767]: I1124 21:42:08.780693 4767 generic.go:334] "Generic (PLEG): container finished" podID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" containerID="9d05af0b2d6f1de7db9851870df8b3d883107fc0545a5bc88a794ae078a8b4dd" exitCode=0 Nov 24 21:42:08 crc kubenswrapper[4767]: I1124 21:42:08.780776 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rsd6" event={"ID":"5749cc38-18d2-411b-b0e8-20dade9fbcfb","Type":"ContainerDied","Data":"9d05af0b2d6f1de7db9851870df8b3d883107fc0545a5bc88a794ae078a8b4dd"} Nov 24 21:42:08 crc kubenswrapper[4767]: I1124 21:42:08.854052 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zztbr" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" containerName="registry-server" probeResult="failure" output=< Nov 24 21:42:08 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 21:42:08 crc kubenswrapper[4767]: > Nov 24 21:42:09 crc kubenswrapper[4767]: I1124 21:42:09.788222 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6bsn" event={"ID":"dc8951ce-1595-45e8-a952-9629251645c1","Type":"ContainerStarted","Data":"7f88c7433d497a23286eaf1606db45546831528ea5ab2e55a92a578d9f27dbe1"} Nov 24 21:42:09 crc kubenswrapper[4767]: I1124 21:42:09.822590 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-x6bsn" podStartSLOduration=3.839456683 podStartE2EDuration="53.822566844s" podCreationTimestamp="2025-11-24 21:41:16 +0000 UTC" firstStartedPulling="2025-11-24 21:41:18.338931623 +0000 UTC m=+161.255914995" lastFinishedPulling="2025-11-24 21:42:08.322041784 +0000 UTC m=+211.239025156" observedRunningTime="2025-11-24 21:42:09.815942496 +0000 UTC m=+212.732925868" watchObservedRunningTime="2025-11-24 21:42:09.822566844 +0000 UTC m=+212.739550216" Nov 24 21:42:10 crc kubenswrapper[4767]: I1124 21:42:10.463067 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:42:10 crc kubenswrapper[4767]: I1124 21:42:10.509936 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:42:10 crc kubenswrapper[4767]: I1124 21:42:10.807998 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbxc8" event={"ID":"17c5b830-cb3a-4c80-984a-873e874152ab","Type":"ContainerStarted","Data":"d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16"} Nov 24 21:42:10 crc kubenswrapper[4767]: I1124 21:42:10.812260 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rsd6" event={"ID":"5749cc38-18d2-411b-b0e8-20dade9fbcfb","Type":"ContainerStarted","Data":"89042bc3446893548691f9571da335bfd98d8675ff5408be006f0d46bcd09ca3"} Nov 24 21:42:10 crc kubenswrapper[4767]: I1124 21:42:10.813676 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:42:10 crc kubenswrapper[4767]: I1124 21:42:10.813770 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:42:10 crc kubenswrapper[4767]: I1124 21:42:10.828434 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jbxc8" podStartSLOduration=1.769716078 podStartE2EDuration="50.828411941s" podCreationTimestamp="2025-11-24 21:41:20 +0000 UTC" firstStartedPulling="2025-11-24 21:41:21.412332485 +0000 UTC m=+164.329315857" lastFinishedPulling="2025-11-24 21:42:10.471028348 +0000 UTC m=+213.388011720" observedRunningTime="2025-11-24 21:42:10.826746767 +0000 UTC m=+213.743730139" watchObservedRunningTime="2025-11-24 21:42:10.828411941 +0000 UTC m=+213.745395313" Nov 24 21:42:10 crc kubenswrapper[4767]: I1124 21:42:10.847150 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6rsd6" podStartSLOduration=1.776543207 podStartE2EDuration="53.847133114s" podCreationTimestamp="2025-11-24 21:41:17 +0000 UTC" firstStartedPulling="2025-11-24 21:41:18.335972829 +0000 UTC m=+161.252956201" lastFinishedPulling="2025-11-24 21:42:10.406562736 +0000 UTC m=+213.323546108" observedRunningTime="2025-11-24 21:42:10.845724886 +0000 UTC m=+213.762708258" watchObservedRunningTime="2025-11-24 21:42:10.847133114 +0000 UTC m=+213.764116486" Nov 24 21:42:11 crc kubenswrapper[4767]: I1124 21:42:11.819496 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj9vz" event={"ID":"88e527cd-5ef0-49bd-bfde-7321ba67bb7e","Type":"ContainerStarted","Data":"2e350c2511872c35bc61900a2db4768852da12cfc3ac89989706d2f4f8ec28c5"} Nov 24 21:42:11 crc kubenswrapper[4767]: I1124 21:42:11.821197 4767 
generic.go:334] "Generic (PLEG): container finished" podID="4e342052-636d-42a3-a409-57cc627ec192" containerID="0ce0fc8043acf937f6dfcb61b19686f272af14d2415b671f403e143372f7c25b" exitCode=0 Nov 24 21:42:11 crc kubenswrapper[4767]: I1124 21:42:11.821296 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn4hh" event={"ID":"4e342052-636d-42a3-a409-57cc627ec192","Type":"ContainerDied","Data":"0ce0fc8043acf937f6dfcb61b19686f272af14d2415b671f403e143372f7c25b"} Nov 24 21:42:11 crc kubenswrapper[4767]: I1124 21:42:11.822860 4767 generic.go:334] "Generic (PLEG): container finished" podID="f7a67465-9ccf-47fb-abda-d0c701f29a82" containerID="4ec41fa64df7f2e3cf0b1e6c159e3e37e244cd62c1447e52a54fa933b53cc043" exitCode=0 Nov 24 21:42:11 crc kubenswrapper[4767]: I1124 21:42:11.822895 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmcnm" event={"ID":"f7a67465-9ccf-47fb-abda-d0c701f29a82","Type":"ContainerDied","Data":"4ec41fa64df7f2e3cf0b1e6c159e3e37e244cd62c1447e52a54fa933b53cc043"} Nov 24 21:42:11 crc kubenswrapper[4767]: I1124 21:42:11.861284 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jbxc8" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" containerName="registry-server" probeResult="failure" output=< Nov 24 21:42:11 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 21:42:11 crc kubenswrapper[4767]: > Nov 24 21:42:12 crc kubenswrapper[4767]: I1124 21:42:12.828507 4767 generic.go:334] "Generic (PLEG): container finished" podID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" containerID="2e350c2511872c35bc61900a2db4768852da12cfc3ac89989706d2f4f8ec28c5" exitCode=0 Nov 24 21:42:12 crc kubenswrapper[4767]: I1124 21:42:12.828586 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj9vz" event={"ID":"88e527cd-5ef0-49bd-bfde-7321ba67bb7e","Type":"ContainerDied","Data":"2e350c2511872c35bc61900a2db4768852da12cfc3ac89989706d2f4f8ec28c5"} Nov 24 21:42:12 crc kubenswrapper[4767]: I1124 21:42:12.831784 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn4hh" event={"ID":"4e342052-636d-42a3-a409-57cc627ec192","Type":"ContainerStarted","Data":"a9cc17d9be993e073ba2942965c5ae5902b1ec90e09ee2d558a31b1cbb360f32"} Nov 24 21:42:13 crc kubenswrapper[4767]: I1124 21:42:13.839696 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmcnm" event={"ID":"f7a67465-9ccf-47fb-abda-d0c701f29a82","Type":"ContainerStarted","Data":"88928990be39ab08e531bf4a486554c74f5ceea19eaa537a35d7a204f1684203"} Nov 24 21:42:13 crc kubenswrapper[4767]: I1124 21:42:13.859456 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sn4hh" podStartSLOduration=2.845429854 podStartE2EDuration="54.859432317s" podCreationTimestamp="2025-11-24 21:41:19 +0000 UTC" firstStartedPulling="2025-11-24 21:41:20.382681819 +0000 UTC m=+163.299665191" lastFinishedPulling="2025-11-24 21:42:12.396684282 +0000 UTC m=+215.313667654" observedRunningTime="2025-11-24 21:42:13.856671192 +0000 UTC m=+216.773654574" watchObservedRunningTime="2025-11-24 21:42:13.859432317 +0000 UTC m=+216.776415689" Nov 24 21:42:13 crc kubenswrapper[4767]: I1124 21:42:13.875885 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dmcnm" 
podStartSLOduration=2.277536796 podStartE2EDuration="54.875867078s" podCreationTimestamp="2025-11-24 21:41:19 +0000 UTC" firstStartedPulling="2025-11-24 21:41:20.383421181 +0000 UTC m=+163.300404553" lastFinishedPulling="2025-11-24 21:42:12.981751463 +0000 UTC m=+215.898734835" observedRunningTime="2025-11-24 21:42:13.873624018 +0000 UTC m=+216.790607410" watchObservedRunningTime="2025-11-24 21:42:13.875867078 +0000 UTC m=+216.792850450" Nov 24 21:42:14 crc kubenswrapper[4767]: I1124 21:42:14.845731 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj9vz" event={"ID":"88e527cd-5ef0-49bd-bfde-7321ba67bb7e","Type":"ContainerStarted","Data":"be448979e0bc30547a4325d1fc24386c9f7e846aefbf9ef5366695f7c00e3956"} Nov 24 21:42:14 crc kubenswrapper[4767]: I1124 21:42:14.865394 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fj9vz" podStartSLOduration=2.836789654 podStartE2EDuration="57.865374247s" podCreationTimestamp="2025-11-24 21:41:17 +0000 UTC" firstStartedPulling="2025-11-24 21:41:18.328856436 +0000 UTC m=+161.245839808" lastFinishedPulling="2025-11-24 21:42:13.357441029 +0000 UTC m=+216.274424401" observedRunningTime="2025-11-24 21:42:14.864168165 +0000 UTC m=+217.781151547" watchObservedRunningTime="2025-11-24 21:42:14.865374247 +0000 UTC m=+217.782357619" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.199802 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.200668 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.236403 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.439530 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.439615 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.486212 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.607148 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.607421 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.641097 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.843400 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.902687 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.905692 
4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:42:17 crc kubenswrapper[4767]: I1124 21:42:17.913091 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:42:18 crc kubenswrapper[4767]: I1124 21:42:18.917334 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:42:19 crc kubenswrapper[4767]: I1124 21:42:19.438727 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:42:19 crc kubenswrapper[4767]: I1124 21:42:19.439212 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:42:19 crc kubenswrapper[4767]: I1124 21:42:19.478020 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:42:19 crc kubenswrapper[4767]: I1124 21:42:19.835299 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:42:19 crc kubenswrapper[4767]: I1124 21:42:19.835351 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:42:19 crc kubenswrapper[4767]: I1124 21:42:19.875188 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:42:19 crc kubenswrapper[4767]: I1124 21:42:19.923187 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:42:19 crc kubenswrapper[4767]: I1124 21:42:19.923782 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:42:20 crc kubenswrapper[4767]: I1124 21:42:20.146126 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zztbr"] Nov 24 21:42:20 crc kubenswrapper[4767]: I1124 21:42:20.146386 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zztbr" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" containerName="registry-server" containerID="cri-o://dbfb62f29f4801e40a5654dd9cca07cdbb96575183221d2b3628beb5d786978e" gracePeriod=2 Nov 24 21:42:20 crc kubenswrapper[4767]: E1124 21:42:20.287428 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dcd966d_f62e_4fa8_9f85_a99fa95cf673.slice/crio-dbfb62f29f4801e40a5654dd9cca07cdbb96575183221d2b3628beb5d786978e.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:42:20 crc kubenswrapper[4767]: I1124 21:42:20.881293 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:42:20 crc kubenswrapper[4767]: I1124 21:42:20.894099 4767 generic.go:334] "Generic (PLEG): container finished" podID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" containerID="dbfb62f29f4801e40a5654dd9cca07cdbb96575183221d2b3628beb5d786978e" exitCode=0 Nov 24 21:42:20 crc kubenswrapper[4767]: I1124 21:42:20.894143 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zztbr" event={"ID":"0dcd966d-f62e-4fa8-9f85-a99fa95cf673","Type":"ContainerDied","Data":"dbfb62f29f4801e40a5654dd9cca07cdbb96575183221d2b3628beb5d786978e"} Nov 24 21:42:20 crc kubenswrapper[4767]: I1124 21:42:20.950457 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.151588 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fj9vz"] Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.151959 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fj9vz" podUID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" containerName="registry-server" containerID="cri-o://be448979e0bc30547a4325d1fc24386c9f7e846aefbf9ef5366695f7c00e3956" gracePeriod=2 Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.792836 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.829887 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-catalog-content\") pod \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.830093 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr9br\" (UniqueName: \"kubernetes.io/projected/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-kube-api-access-tr9br\") pod \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.832188 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-utilities" (OuterVolumeSpecName: "utilities") pod "0dcd966d-f62e-4fa8-9f85-a99fa95cf673" (UID: "0dcd966d-f62e-4fa8-9f85-a99fa95cf673"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.832362 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-utilities\") pod \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\" (UID: \"0dcd966d-f62e-4fa8-9f85-a99fa95cf673\") " Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.832749 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.837174 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-kube-api-access-tr9br" (OuterVolumeSpecName: "kube-api-access-tr9br") pod "0dcd966d-f62e-4fa8-9f85-a99fa95cf673" (UID: "0dcd966d-f62e-4fa8-9f85-a99fa95cf673"). InnerVolumeSpecName "kube-api-access-tr9br". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.889307 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dcd966d-f62e-4fa8-9f85-a99fa95cf673" (UID: "0dcd966d-f62e-4fa8-9f85-a99fa95cf673"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.907832 4767 generic.go:334] "Generic (PLEG): container finished" podID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" containerID="be448979e0bc30547a4325d1fc24386c9f7e846aefbf9ef5366695f7c00e3956" exitCode=0 Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.907956 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj9vz" event={"ID":"88e527cd-5ef0-49bd-bfde-7321ba67bb7e","Type":"ContainerDied","Data":"be448979e0bc30547a4325d1fc24386c9f7e846aefbf9ef5366695f7c00e3956"} Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.911654 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zztbr" event={"ID":"0dcd966d-f62e-4fa8-9f85-a99fa95cf673","Type":"ContainerDied","Data":"9a855bcf5df3afc9b39cd8a8babced31248797f1efa02ce14c42e7a1ba7ade2d"} Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.911683 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zztbr" Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.911732 4767 scope.go:117] "RemoveContainer" containerID="dbfb62f29f4801e40a5654dd9cca07cdbb96575183221d2b3628beb5d786978e" Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.930134 4767 scope.go:117] "RemoveContainer" containerID="4257b1c622fd03e5e61df280696b1cafd782f16653d85c2648cf3a5685bb401f" Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.933539 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.933565 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr9br\" (UniqueName: \"kubernetes.io/projected/0dcd966d-f62e-4fa8-9f85-a99fa95cf673-kube-api-access-tr9br\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.940431 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zztbr"] Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.947111 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zztbr"] Nov 24 21:42:21 crc kubenswrapper[4767]: I1124 21:42:21.969783 4767 scope.go:117] "RemoveContainer" containerID="698ca21b0874153360560fe99efc3800ddc5d650ec098c0234780f79e5648a46" Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.101816 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.138433 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcnhx\" (UniqueName: \"kubernetes.io/projected/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-kube-api-access-bcnhx\") pod \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.139007 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-catalog-content\") pod \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.139088 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-utilities\") pod \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\" (UID: \"88e527cd-5ef0-49bd-bfde-7321ba67bb7e\") " Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.140333 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-utilities" (OuterVolumeSpecName: "utilities") pod "88e527cd-5ef0-49bd-bfde-7321ba67bb7e" (UID: "88e527cd-5ef0-49bd-bfde-7321ba67bb7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.142303 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-kube-api-access-bcnhx" (OuterVolumeSpecName: "kube-api-access-bcnhx") pod "88e527cd-5ef0-49bd-bfde-7321ba67bb7e" (UID: "88e527cd-5ef0-49bd-bfde-7321ba67bb7e"). InnerVolumeSpecName "kube-api-access-bcnhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.217861 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88e527cd-5ef0-49bd-bfde-7321ba67bb7e" (UID: "88e527cd-5ef0-49bd-bfde-7321ba67bb7e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.240713 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcnhx\" (UniqueName: \"kubernetes.io/projected/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-kube-api-access-bcnhx\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.240756 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.240772 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e527cd-5ef0-49bd-bfde-7321ba67bb7e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.319931 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" path="/var/lib/kubelet/pods/0dcd966d-f62e-4fa8-9f85-a99fa95cf673/volumes" Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.551981 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dmcnm"] Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.552414 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dmcnm" podUID="f7a67465-9ccf-47fb-abda-d0c701f29a82" containerName="registry-server" containerID="cri-o://88928990be39ab08e531bf4a486554c74f5ceea19eaa537a35d7a204f1684203" gracePeriod=2 Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.922459 4767 generic.go:334] "Generic (PLEG): container finished" podID="f7a67465-9ccf-47fb-abda-d0c701f29a82" containerID="88928990be39ab08e531bf4a486554c74f5ceea19eaa537a35d7a204f1684203" exitCode=0 Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.922830 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmcnm" event={"ID":"f7a67465-9ccf-47fb-abda-d0c701f29a82","Type":"ContainerDied","Data":"88928990be39ab08e531bf4a486554c74f5ceea19eaa537a35d7a204f1684203"} Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.927423 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj9vz" event={"ID":"88e527cd-5ef0-49bd-bfde-7321ba67bb7e","Type":"ContainerDied","Data":"a21b6db139b985e34c46f96c0b2e515e5e62b0fa75c7913cc7e71a7773df2696"} Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.927492 4767 scope.go:117] "RemoveContainer" containerID="be448979e0bc30547a4325d1fc24386c9f7e846aefbf9ef5366695f7c00e3956" Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.927637 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fj9vz" Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.953328 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fj9vz"] Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.957523 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fj9vz"] Nov 24 21:42:22 crc kubenswrapper[4767]: I1124 21:42:22.963193 4767 scope.go:117] "RemoveContainer" containerID="2e350c2511872c35bc61900a2db4768852da12cfc3ac89989706d2f4f8ec28c5" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.031829 4767 scope.go:117] "RemoveContainer" containerID="b5502ec1af98346cce22fdef4c65fa04132525ff09410f31f5159863ae3b76b4" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.052851 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.153295 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-utilities\") pod \"f7a67465-9ccf-47fb-abda-d0c701f29a82\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.153456 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-catalog-content\") pod \"f7a67465-9ccf-47fb-abda-d0c701f29a82\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.153516 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66l5x\" (UniqueName: \"kubernetes.io/projected/f7a67465-9ccf-47fb-abda-d0c701f29a82-kube-api-access-66l5x\") pod \"f7a67465-9ccf-47fb-abda-d0c701f29a82\" (UID: \"f7a67465-9ccf-47fb-abda-d0c701f29a82\") " Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.154634 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-utilities" (OuterVolumeSpecName: "utilities") pod "f7a67465-9ccf-47fb-abda-d0c701f29a82" (UID: "f7a67465-9ccf-47fb-abda-d0c701f29a82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.159943 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a67465-9ccf-47fb-abda-d0c701f29a82-kube-api-access-66l5x" (OuterVolumeSpecName: "kube-api-access-66l5x") pod "f7a67465-9ccf-47fb-abda-d0c701f29a82" (UID: "f7a67465-9ccf-47fb-abda-d0c701f29a82"). InnerVolumeSpecName "kube-api-access-66l5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.172243 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7a67465-9ccf-47fb-abda-d0c701f29a82" (UID: "f7a67465-9ccf-47fb-abda-d0c701f29a82"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.255381 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.255414 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66l5x\" (UniqueName: \"kubernetes.io/projected/f7a67465-9ccf-47fb-abda-d0c701f29a82-kube-api-access-66l5x\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.255426 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a67465-9ccf-47fb-abda-d0c701f29a82-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.936032 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmcnm" event={"ID":"f7a67465-9ccf-47fb-abda-d0c701f29a82","Type":"ContainerDied","Data":"781458a53490b46d039af87f455547a13932b4e44e7e720d52f631f90e71423a"} Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.936090 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dmcnm" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.936108 4767 scope.go:117] "RemoveContainer" containerID="88928990be39ab08e531bf4a486554c74f5ceea19eaa537a35d7a204f1684203" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.965211 4767 scope.go:117] "RemoveContainer" containerID="4ec41fa64df7f2e3cf0b1e6c159e3e37e244cd62c1447e52a54fa933b53cc043" Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.986645 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dmcnm"] Nov 24 21:42:23 crc kubenswrapper[4767]: I1124 21:42:23.997158 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dmcnm"] Nov 24 21:42:24 crc kubenswrapper[4767]: I1124 21:42:24.012546 4767 scope.go:117] "RemoveContainer" containerID="d2cbe2573337950c41a4ae85d2a2634099655500666943565798317f2bbb2fa5" Nov 24 21:42:24 crc kubenswrapper[4767]: I1124 21:42:24.322730 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" path="/var/lib/kubelet/pods/88e527cd-5ef0-49bd-bfde-7321ba67bb7e/volumes" Nov 24 21:42:24 crc kubenswrapper[4767]: I1124 21:42:24.324109 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a67465-9ccf-47fb-abda-d0c701f29a82" path="/var/lib/kubelet/pods/f7a67465-9ccf-47fb-abda-d0c701f29a82/volumes" Nov 24 21:42:24 crc kubenswrapper[4767]: I1124 21:42:24.961315 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jbxc8"] Nov 24 21:42:24 crc kubenswrapper[4767]: I1124 21:42:24.961548 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jbxc8" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" containerName="registry-server" containerID="cri-o://d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16" gracePeriod=2 Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.861293 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.890204 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-catalog-content\") pod \"17c5b830-cb3a-4c80-984a-873e874152ab\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.890375 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg4ff\" (UniqueName: \"kubernetes.io/projected/17c5b830-cb3a-4c80-984a-873e874152ab-kube-api-access-cg4ff\") pod \"17c5b830-cb3a-4c80-984a-873e874152ab\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.891471 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-utilities\") pod \"17c5b830-cb3a-4c80-984a-873e874152ab\" (UID: \"17c5b830-cb3a-4c80-984a-873e874152ab\") " Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.892100 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-utilities" (OuterVolumeSpecName: "utilities") pod "17c5b830-cb3a-4c80-984a-873e874152ab" (UID: "17c5b830-cb3a-4c80-984a-873e874152ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.899394 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c5b830-cb3a-4c80-984a-873e874152ab-kube-api-access-cg4ff" (OuterVolumeSpecName: "kube-api-access-cg4ff") pod "17c5b830-cb3a-4c80-984a-873e874152ab" (UID: "17c5b830-cb3a-4c80-984a-873e874152ab"). InnerVolumeSpecName "kube-api-access-cg4ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.968284 4767 generic.go:334] "Generic (PLEG): container finished" podID="17c5b830-cb3a-4c80-984a-873e874152ab" containerID="d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16" exitCode=0 Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.968338 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jbxc8" Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.968335 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbxc8" event={"ID":"17c5b830-cb3a-4c80-984a-873e874152ab","Type":"ContainerDied","Data":"d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16"} Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.968524 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbxc8" event={"ID":"17c5b830-cb3a-4c80-984a-873e874152ab","Type":"ContainerDied","Data":"2e98976ed215c287a6cce0419e955dcfeca4dcc48c0b78eeefa80ea9b726baa1"} Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.968550 4767 scope.go:117] "RemoveContainer" containerID="d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16" Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.983007 4767 scope.go:117] "RemoveContainer" containerID="de1c90ee078443bdd463ffedab05f2e1d673a2c163b79f6ad01ef59ab363f509" Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.985377 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17c5b830-cb3a-4c80-984a-873e874152ab" (UID: "17c5b830-cb3a-4c80-984a-873e874152ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.992835 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg4ff\" (UniqueName: \"kubernetes.io/projected/17c5b830-cb3a-4c80-984a-873e874152ab-kube-api-access-cg4ff\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.992867 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:25 crc kubenswrapper[4767]: I1124 21:42:25.992878 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c5b830-cb3a-4c80-984a-873e874152ab-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:26 crc kubenswrapper[4767]: I1124 21:42:26.009533 4767 scope.go:117] "RemoveContainer" containerID="4b110c87c2967c8d3b6f32c0ee42d3dea8f24b0da8336592c5ec39fb142b1de0" Nov 24 21:42:26 crc kubenswrapper[4767]: I1124 21:42:26.020631 4767 scope.go:117] "RemoveContainer" containerID="d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16" Nov 24 21:42:26 crc kubenswrapper[4767]: E1124 21:42:26.021466 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16\": container with ID starting with d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16 not found: ID does not exist" containerID="d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16" Nov 24 21:42:26 crc kubenswrapper[4767]: I1124 21:42:26.021497 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16"} err="failed to get container status \"d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16\": rpc error: code = NotFound desc = could not find container 
\"d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16\": container with ID starting with d7df3a17c98aa9639155301175c6f6e363334d34fff70181f85f976cb4538e16 not found: ID does not exist" Nov 24 21:42:26 crc kubenswrapper[4767]: I1124 21:42:26.021519 4767 scope.go:117] "RemoveContainer" containerID="de1c90ee078443bdd463ffedab05f2e1d673a2c163b79f6ad01ef59ab363f509" Nov 24 21:42:26 crc kubenswrapper[4767]: E1124 21:42:26.021835 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de1c90ee078443bdd463ffedab05f2e1d673a2c163b79f6ad01ef59ab363f509\": container with ID starting with de1c90ee078443bdd463ffedab05f2e1d673a2c163b79f6ad01ef59ab363f509 not found: ID does not exist" containerID="de1c90ee078443bdd463ffedab05f2e1d673a2c163b79f6ad01ef59ab363f509" Nov 24 21:42:26 crc kubenswrapper[4767]: I1124 21:42:26.021874 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1c90ee078443bdd463ffedab05f2e1d673a2c163b79f6ad01ef59ab363f509"} err="failed to get container status \"de1c90ee078443bdd463ffedab05f2e1d673a2c163b79f6ad01ef59ab363f509\": rpc error: code = NotFound desc = could not find container \"de1c90ee078443bdd463ffedab05f2e1d673a2c163b79f6ad01ef59ab363f509\": container with ID starting with de1c90ee078443bdd463ffedab05f2e1d673a2c163b79f6ad01ef59ab363f509 not found: ID does not exist" Nov 24 21:42:26 crc kubenswrapper[4767]: I1124 21:42:26.021901 4767 scope.go:117] "RemoveContainer" containerID="4b110c87c2967c8d3b6f32c0ee42d3dea8f24b0da8336592c5ec39fb142b1de0" Nov 24 21:42:26 crc kubenswrapper[4767]: E1124 21:42:26.022345 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b110c87c2967c8d3b6f32c0ee42d3dea8f24b0da8336592c5ec39fb142b1de0\": container with ID starting with 4b110c87c2967c8d3b6f32c0ee42d3dea8f24b0da8336592c5ec39fb142b1de0 not found: ID does not exist" containerID="4b110c87c2967c8d3b6f32c0ee42d3dea8f24b0da8336592c5ec39fb142b1de0" Nov 24 21:42:26 crc kubenswrapper[4767]: I1124 21:42:26.022462 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b110c87c2967c8d3b6f32c0ee42d3dea8f24b0da8336592c5ec39fb142b1de0"} err="failed to get container status \"4b110c87c2967c8d3b6f32c0ee42d3dea8f24b0da8336592c5ec39fb142b1de0\": rpc error: code = NotFound desc = could not find container \"4b110c87c2967c8d3b6f32c0ee42d3dea8f24b0da8336592c5ec39fb142b1de0\": container with ID starting with 4b110c87c2967c8d3b6f32c0ee42d3dea8f24b0da8336592c5ec39fb142b1de0 not found: ID does not exist" Nov 24 21:42:26 crc kubenswrapper[4767]: I1124 21:42:26.295714 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jbxc8"] Nov 24 21:42:26 crc kubenswrapper[4767]: I1124 21:42:26.299196 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jbxc8"] Nov 24 21:42:26 crc kubenswrapper[4767]: I1124 21:42:26.321205 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" path="/var/lib/kubelet/pods/17c5b830-cb3a-4c80-984a-873e874152ab/volumes" Nov 24 21:42:29 crc kubenswrapper[4767]: I1124 21:42:29.725106 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" podUID="311b014f-099c-4f63-a46e-ccf2684847db" containerName="oauth-openshift" 
containerID="cri-o://3c806da595f81eecc06992dfe5aa67a41ece2878eec9897ab7622e7aeadbae4f" gracePeriod=15 Nov 24 21:42:29 crc kubenswrapper[4767]: I1124 21:42:29.941741 4767 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x76bn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Nov 24 21:42:29 crc kubenswrapper[4767]: I1124 21:42:29.941797 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" podUID="311b014f-099c-4f63-a46e-ccf2684847db" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Nov 24 21:42:29 crc kubenswrapper[4767]: I1124 21:42:29.994605 4767 generic.go:334] "Generic (PLEG): container finished" podID="311b014f-099c-4f63-a46e-ccf2684847db" containerID="3c806da595f81eecc06992dfe5aa67a41ece2878eec9897ab7622e7aeadbae4f" exitCode=0 Nov 24 21:42:29 crc kubenswrapper[4767]: I1124 21:42:29.994651 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" event={"ID":"311b014f-099c-4f63-a46e-ccf2684847db","Type":"ContainerDied","Data":"3c806da595f81eecc06992dfe5aa67a41ece2878eec9897ab7622e7aeadbae4f"} Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.123146 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.145692 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-serving-cert\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.145760 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-trusted-ca-bundle\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.145816 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm9wz\" (UniqueName: \"kubernetes.io/projected/311b014f-099c-4f63-a46e-ccf2684847db-kube-api-access-zm9wz\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.145851 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-router-certs\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.145877 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-idp-0-file-data\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: 
\"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.145899 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-login\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.145932 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-audit-policies\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.145958 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-service-ca\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.145978 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-error\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.146006 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-cliconfig\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.146038 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-ocp-branding-template\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.146067 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-session\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.146102 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-provider-selection\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.146125 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/311b014f-099c-4f63-a46e-ccf2684847db-audit-dir\") pod \"311b014f-099c-4f63-a46e-ccf2684847db\" (UID: \"311b014f-099c-4f63-a46e-ccf2684847db\") " Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.146408 4767 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/311b014f-099c-4f63-a46e-ccf2684847db-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.147521 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.147605 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.149131 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.152899 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.152961 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.159860 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/311b014f-099c-4f63-a46e-ccf2684847db-kube-api-access-zm9wz" (OuterVolumeSpecName: "kube-api-access-zm9wz") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "kube-api-access-zm9wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.162393 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.163072 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.164438 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.178959 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.179519 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.179532 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.182863 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "311b014f-099c-4f63-a46e-ccf2684847db" (UID: "311b014f-099c-4f63-a46e-ccf2684847db"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.246905 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.246932 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.246942 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm9wz\" (UniqueName: \"kubernetes.io/projected/311b014f-099c-4f63-a46e-ccf2684847db-kube-api-access-zm9wz\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.246952 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.246961 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.246970 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.246979 4767 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.246988 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.247019 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.247030 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.247040 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.247050 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.247060 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/311b014f-099c-4f63-a46e-ccf2684847db-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: I1124 21:42:30.247071 4767 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/311b014f-099c-4f63-a46e-ccf2684847db-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:30 crc kubenswrapper[4767]: E1124 21:42:30.408404 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod311b014f_099c_4f63_a46e_ccf2684847db.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod311b014f_099c_4f63_a46e_ccf2684847db.slice/crio-1d6d8a96cab40351985b41fb594a6731508bb145cb86d2f8893e89a1f63c8c58\": RecentStats: unable to find data in memory cache]" Nov 24 21:42:31 crc kubenswrapper[4767]: I1124 21:42:31.001861 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" event={"ID":"311b014f-099c-4f63-a46e-ccf2684847db","Type":"ContainerDied","Data":"1d6d8a96cab40351985b41fb594a6731508bb145cb86d2f8893e89a1f63c8c58"} Nov 24 21:42:31 crc kubenswrapper[4767]: I1124 21:42:31.001915 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x76bn" Nov 24 21:42:31 crc kubenswrapper[4767]: I1124 21:42:31.001926 4767 scope.go:117] "RemoveContainer" containerID="3c806da595f81eecc06992dfe5aa67a41ece2878eec9897ab7622e7aeadbae4f" Nov 24 21:42:31 crc kubenswrapper[4767]: I1124 21:42:31.026650 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x76bn"] Nov 24 21:42:31 crc kubenswrapper[4767]: I1124 21:42:31.031300 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x76bn"] Nov 24 21:42:32 crc kubenswrapper[4767]: I1124 21:42:32.320325 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="311b014f-099c-4f63-a46e-ccf2684847db" path="/var/lib/kubelet/pods/311b014f-099c-4f63-a46e-ccf2684847db/volumes" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.071601 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7dc5844c99-5dsxp"] Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072109 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" containerName="extract-utilities" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072135 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" containerName="extract-utilities" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072149 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" containerName="extract-utilities" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072157 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" 
containerName="extract-utilities" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072168 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" containerName="extract-content" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072177 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" containerName="extract-content" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072188 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841bea93-8bc2-48e5-8e65-a98e32e934b4" containerName="pruner" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072197 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="841bea93-8bc2-48e5-8e65-a98e32e934b4" containerName="pruner" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072206 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072213 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072225 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a67465-9ccf-47fb-abda-d0c701f29a82" containerName="extract-content" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072232 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a67465-9ccf-47fb-abda-d0c701f29a82" containerName="extract-content" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072243 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311b014f-099c-4f63-a46e-ccf2684847db" containerName="oauth-openshift" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072250 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="311b014f-099c-4f63-a46e-ccf2684847db" containerName="oauth-openshift" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072260 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" containerName="extract-content" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072285 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" containerName="extract-content" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072297 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" containerName="extract-content" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072304 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" containerName="extract-content" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072313 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072322 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072332 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" containerName="extract-utilities" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072353 4767 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" containerName="extract-utilities" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072364 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a67465-9ccf-47fb-abda-d0c701f29a82" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072371 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a67465-9ccf-47fb-abda-d0c701f29a82" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072382 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072390 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072404 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f094a59-e96b-4f46-b5d0-95bd70db27d4" containerName="pruner" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072413 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f094a59-e96b-4f46-b5d0-95bd70db27d4" containerName="pruner" Nov 24 21:42:36 crc kubenswrapper[4767]: E1124 21:42:36.072422 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a67465-9ccf-47fb-abda-d0c701f29a82" containerName="extract-utilities" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072429 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a67465-9ccf-47fb-abda-d0c701f29a82" containerName="extract-utilities" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072534 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="841bea93-8bc2-48e5-8e65-a98e32e934b4" containerName="pruner" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072547 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c5b830-cb3a-4c80-984a-873e874152ab" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072560 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f094a59-e96b-4f46-b5d0-95bd70db27d4" containerName="pruner" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072568 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a67465-9ccf-47fb-abda-d0c701f29a82" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072578 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dcd966d-f62e-4fa8-9f85-a99fa95cf673" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072587 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e527cd-5ef0-49bd-bfde-7321ba67bb7e" containerName="registry-server" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.072596 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="311b014f-099c-4f63-a46e-ccf2684847db" containerName="oauth-openshift" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.073045 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.079952 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.082997 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.083410 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.083420 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.083449 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.083449 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.083552 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.083771 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.084243 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.084543 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.084807 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.084988 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.092062 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.096930 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7dc5844c99-5dsxp"] Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.097784 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.113040 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.123572 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " 
pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.123757 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.123965 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.124248 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.124451 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.124662 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-template-error\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.124882 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.125042 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d6edee4-ae64-4131-810a-b4485324578a-audit-dir\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.125250 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.125418 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-audit-policies\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.125819 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-template-login\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.125946 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjlcl\" (UniqueName: \"kubernetes.io/projected/8d6edee4-ae64-4131-810a-b4485324578a-kube-api-access-sjlcl\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.126170 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.126368 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-session\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.227884 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-template-error\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.228198 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.228386 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d6edee4-ae64-4131-810a-b4485324578a-audit-dir\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.228499 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.228569 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d6edee4-ae64-4131-810a-b4485324578a-audit-dir\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.228594 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-audit-policies\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.228837 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-template-login\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.228915 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjlcl\" (UniqueName: \"kubernetes.io/projected/8d6edee4-ae64-4131-810a-b4485324578a-kube-api-access-sjlcl\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.229068 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.229152 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-session\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.229233 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.229348 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.229453 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.229525 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.229568 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.230041 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-audit-policies\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.230370 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.232478 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.232631 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.235868 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.235999 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-template-login\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.236837 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.238072 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-session\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.238461 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.238679 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.238771 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.240869 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d6edee4-ae64-4131-810a-b4485324578a-v4-0-config-user-template-error\") pod \"oauth-openshift-7dc5844c99-5dsxp\" 
(UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.260538 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjlcl\" (UniqueName: \"kubernetes.io/projected/8d6edee4-ae64-4131-810a-b4485324578a-kube-api-access-sjlcl\") pod \"oauth-openshift-7dc5844c99-5dsxp\" (UID: \"8d6edee4-ae64-4131-810a-b4485324578a\") " pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.409365 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:36 crc kubenswrapper[4767]: I1124 21:42:36.934873 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7dc5844c99-5dsxp"] Nov 24 21:42:37 crc kubenswrapper[4767]: I1124 21:42:37.043933 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" event={"ID":"8d6edee4-ae64-4131-810a-b4485324578a","Type":"ContainerStarted","Data":"9c34e4bc9df26a97d05a170520fd81502bceea462fef066bf4d9dce5f0b5f954"} Nov 24 21:42:38 crc kubenswrapper[4767]: I1124 21:42:38.053386 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" event={"ID":"8d6edee4-ae64-4131-810a-b4485324578a","Type":"ContainerStarted","Data":"e1ab1b87f8bd68254ec817ef1d75491cfe522f0937ab51c7f2a918c988dfe7e4"} Nov 24 21:42:38 crc kubenswrapper[4767]: I1124 21:42:38.053870 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:38 crc kubenswrapper[4767]: I1124 21:42:38.062216 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" Nov 24 21:42:38 crc kubenswrapper[4767]: I1124 21:42:38.088679 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7dc5844c99-5dsxp" podStartSLOduration=34.088650754 podStartE2EDuration="34.088650754s" podCreationTimestamp="2025-11-24 21:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:38.086591169 +0000 UTC m=+241.003574621" watchObservedRunningTime="2025-11-24 21:42:38.088650754 +0000 UTC m=+241.005634166" Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.908997 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rsd6"] Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.917633 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6rsd6" podUID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" containerName="registry-server" containerID="cri-o://89042bc3446893548691f9571da335bfd98d8675ff5408be006f0d46bcd09ca3" gracePeriod=30 Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.917760 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x6bsn"] Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.920839 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x6bsn" podUID="dc8951ce-1595-45e8-a952-9629251645c1" 
containerName="registry-server" containerID="cri-o://7f88c7433d497a23286eaf1606db45546831528ea5ab2e55a92a578d9f27dbe1" gracePeriod=30 Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.941846 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpc7v"] Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.942371 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" podUID="0a0c5d70-78fa-42c1-9e79-745b42839d04" containerName="marketplace-operator" containerID="cri-o://04075b9f1f2d922646375b0ea0c09ae956cce16bdbaad04d40fc5cac4e238400" gracePeriod=30 Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.947198 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sn4hh"] Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.947455 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sn4hh" podUID="4e342052-636d-42a3-a409-57cc627ec192" containerName="registry-server" containerID="cri-o://a9cc17d9be993e073ba2942965c5ae5902b1ec90e09ee2d558a31b1cbb360f32" gracePeriod=30 Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.950424 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vwm65"] Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.950919 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vwm65" podUID="6fbb795f-ff35-4157-980c-baed2936f39e" containerName="registry-server" containerID="cri-o://35c579af1f880c5e4035a0baaf75b4954a46498ee162433515a5322e19f36e73" gracePeriod=30 Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.957200 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tmp5k"] Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.958216 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:52 crc kubenswrapper[4767]: I1124 21:42:52.960305 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tmp5k"] Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.057170 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad09ebd3-c91e-47fc-9f29-6a6acded7085-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tmp5k\" (UID: \"ad09ebd3-c91e-47fc-9f29-6a6acded7085\") " pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.057389 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxwnc\" (UniqueName: \"kubernetes.io/projected/ad09ebd3-c91e-47fc-9f29-6a6acded7085-kube-api-access-bxwnc\") pod \"marketplace-operator-79b997595-tmp5k\" (UID: \"ad09ebd3-c91e-47fc-9f29-6a6acded7085\") " pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.057434 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ad09ebd3-c91e-47fc-9f29-6a6acded7085-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tmp5k\" (UID: \"ad09ebd3-c91e-47fc-9f29-6a6acded7085\") " pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.148507 4767 generic.go:334] "Generic (PLEG): container finished" podID="0a0c5d70-78fa-42c1-9e79-745b42839d04" containerID="04075b9f1f2d922646375b0ea0c09ae956cce16bdbaad04d40fc5cac4e238400" exitCode=0 Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.148577 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" event={"ID":"0a0c5d70-78fa-42c1-9e79-745b42839d04","Type":"ContainerDied","Data":"04075b9f1f2d922646375b0ea0c09ae956cce16bdbaad04d40fc5cac4e238400"} Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.153679 4767 generic.go:334] "Generic (PLEG): container finished" podID="6fbb795f-ff35-4157-980c-baed2936f39e" containerID="35c579af1f880c5e4035a0baaf75b4954a46498ee162433515a5322e19f36e73" exitCode=0 Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.153745 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwm65" event={"ID":"6fbb795f-ff35-4157-980c-baed2936f39e","Type":"ContainerDied","Data":"35c579af1f880c5e4035a0baaf75b4954a46498ee162433515a5322e19f36e73"} Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.156359 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn4hh" event={"ID":"4e342052-636d-42a3-a409-57cc627ec192","Type":"ContainerDied","Data":"a9cc17d9be993e073ba2942965c5ae5902b1ec90e09ee2d558a31b1cbb360f32"} Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.156910 4767 generic.go:334] "Generic (PLEG): container finished" podID="4e342052-636d-42a3-a409-57cc627ec192" containerID="a9cc17d9be993e073ba2942965c5ae5902b1ec90e09ee2d558a31b1cbb360f32" exitCode=0 Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.158399 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxwnc\" 
(UniqueName: \"kubernetes.io/projected/ad09ebd3-c91e-47fc-9f29-6a6acded7085-kube-api-access-bxwnc\") pod \"marketplace-operator-79b997595-tmp5k\" (UID: \"ad09ebd3-c91e-47fc-9f29-6a6acded7085\") " pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.158464 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ad09ebd3-c91e-47fc-9f29-6a6acded7085-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tmp5k\" (UID: \"ad09ebd3-c91e-47fc-9f29-6a6acded7085\") " pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.158501 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad09ebd3-c91e-47fc-9f29-6a6acded7085-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tmp5k\" (UID: \"ad09ebd3-c91e-47fc-9f29-6a6acded7085\") " pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.159838 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad09ebd3-c91e-47fc-9f29-6a6acded7085-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tmp5k\" (UID: \"ad09ebd3-c91e-47fc-9f29-6a6acded7085\") " pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.160579 4767 generic.go:334] "Generic (PLEG): container finished" podID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" containerID="89042bc3446893548691f9571da335bfd98d8675ff5408be006f0d46bcd09ca3" exitCode=0 Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.160674 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rsd6" event={"ID":"5749cc38-18d2-411b-b0e8-20dade9fbcfb","Type":"ContainerDied","Data":"89042bc3446893548691f9571da335bfd98d8675ff5408be006f0d46bcd09ca3"} Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.165034 4767 generic.go:334] "Generic (PLEG): container finished" podID="dc8951ce-1595-45e8-a952-9629251645c1" containerID="7f88c7433d497a23286eaf1606db45546831528ea5ab2e55a92a578d9f27dbe1" exitCode=0 Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.165070 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6bsn" event={"ID":"dc8951ce-1595-45e8-a952-9629251645c1","Type":"ContainerDied","Data":"7f88c7433d497a23286eaf1606db45546831528ea5ab2e55a92a578d9f27dbe1"} Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.167364 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ad09ebd3-c91e-47fc-9f29-6a6acded7085-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tmp5k\" (UID: \"ad09ebd3-c91e-47fc-9f29-6a6acded7085\") " pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.175141 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxwnc\" (UniqueName: \"kubernetes.io/projected/ad09ebd3-c91e-47fc-9f29-6a6acded7085-kube-api-access-bxwnc\") pod \"marketplace-operator-79b997595-tmp5k\" (UID: \"ad09ebd3-c91e-47fc-9f29-6a6acded7085\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.277449 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.407488 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.431427 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.467696 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-utilities\") pod \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.467834 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-utilities\") pod \"dc8951ce-1595-45e8-a952-9629251645c1\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.467862 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-catalog-content\") pod \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.467901 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-catalog-content\") pod \"dc8951ce-1595-45e8-a952-9629251645c1\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.467929 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw7fn\" (UniqueName: \"kubernetes.io/projected/dc8951ce-1595-45e8-a952-9629251645c1-kube-api-access-bw7fn\") pod \"dc8951ce-1595-45e8-a952-9629251645c1\" (UID: \"dc8951ce-1595-45e8-a952-9629251645c1\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.467967 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljchs\" (UniqueName: \"kubernetes.io/projected/5749cc38-18d2-411b-b0e8-20dade9fbcfb-kube-api-access-ljchs\") pod \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\" (UID: \"5749cc38-18d2-411b-b0e8-20dade9fbcfb\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.473235 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-utilities" (OuterVolumeSpecName: "utilities") pod "5749cc38-18d2-411b-b0e8-20dade9fbcfb" (UID: "5749cc38-18d2-411b-b0e8-20dade9fbcfb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.473361 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5749cc38-18d2-411b-b0e8-20dade9fbcfb-kube-api-access-ljchs" (OuterVolumeSpecName: "kube-api-access-ljchs") pod "5749cc38-18d2-411b-b0e8-20dade9fbcfb" (UID: "5749cc38-18d2-411b-b0e8-20dade9fbcfb"). InnerVolumeSpecName "kube-api-access-ljchs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.473902 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-utilities" (OuterVolumeSpecName: "utilities") pod "dc8951ce-1595-45e8-a952-9629251645c1" (UID: "dc8951ce-1595-45e8-a952-9629251645c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.475069 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8951ce-1595-45e8-a952-9629251645c1-kube-api-access-bw7fn" (OuterVolumeSpecName: "kube-api-access-bw7fn") pod "dc8951ce-1595-45e8-a952-9629251645c1" (UID: "dc8951ce-1595-45e8-a952-9629251645c1"). InnerVolumeSpecName "kube-api-access-bw7fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.488526 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.494862 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.519521 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.522897 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5749cc38-18d2-411b-b0e8-20dade9fbcfb" (UID: "5749cc38-18d2-411b-b0e8-20dade9fbcfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.534487 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc8951ce-1595-45e8-a952-9629251645c1" (UID: "dc8951ce-1595-45e8-a952-9629251645c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.541480 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tmp5k"] Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.568549 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-catalog-content\") pod \"6fbb795f-ff35-4157-980c-baed2936f39e\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.568595 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-utilities\") pod \"4e342052-636d-42a3-a409-57cc627ec192\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.568634 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpqqq\" (UniqueName: \"kubernetes.io/projected/0a0c5d70-78fa-42c1-9e79-745b42839d04-kube-api-access-lpqqq\") pod \"0a0c5d70-78fa-42c1-9e79-745b42839d04\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.568689 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-catalog-content\") pod \"4e342052-636d-42a3-a409-57cc627ec192\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.568711 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dprs\" (UniqueName: \"kubernetes.io/projected/6fbb795f-ff35-4157-980c-baed2936f39e-kube-api-access-6dprs\") pod \"6fbb795f-ff35-4157-980c-baed2936f39e\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.568733 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-utilities\") pod \"6fbb795f-ff35-4157-980c-baed2936f39e\" (UID: \"6fbb795f-ff35-4157-980c-baed2936f39e\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.568779 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l2hx\" (UniqueName: \"kubernetes.io/projected/4e342052-636d-42a3-a409-57cc627ec192-kube-api-access-4l2hx\") pod \"4e342052-636d-42a3-a409-57cc627ec192\" (UID: \"4e342052-636d-42a3-a409-57cc627ec192\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.568801 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-trusted-ca\") pod \"0a0c5d70-78fa-42c1-9e79-745b42839d04\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.569593 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-utilities" (OuterVolumeSpecName: "utilities") pod "4e342052-636d-42a3-a409-57cc627ec192" (UID: "4e342052-636d-42a3-a409-57cc627ec192"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.569686 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-operator-metrics\") pod \"0a0c5d70-78fa-42c1-9e79-745b42839d04\" (UID: \"0a0c5d70-78fa-42c1-9e79-745b42839d04\") " Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.569946 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljchs\" (UniqueName: \"kubernetes.io/projected/5749cc38-18d2-411b-b0e8-20dade9fbcfb-kube-api-access-ljchs\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.569959 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.569968 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.569977 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.569986 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5749cc38-18d2-411b-b0e8-20dade9fbcfb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.569994 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc8951ce-1595-45e8-a952-9629251645c1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.570002 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw7fn\" (UniqueName: \"kubernetes.io/projected/dc8951ce-1595-45e8-a952-9629251645c1-kube-api-access-bw7fn\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.570328 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-utilities" (OuterVolumeSpecName: "utilities") pod "6fbb795f-ff35-4157-980c-baed2936f39e" (UID: "6fbb795f-ff35-4157-980c-baed2936f39e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.570408 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "0a0c5d70-78fa-42c1-9e79-745b42839d04" (UID: "0a0c5d70-78fa-42c1-9e79-745b42839d04"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.572064 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e342052-636d-42a3-a409-57cc627ec192-kube-api-access-4l2hx" (OuterVolumeSpecName: "kube-api-access-4l2hx") pod "4e342052-636d-42a3-a409-57cc627ec192" (UID: "4e342052-636d-42a3-a409-57cc627ec192"). InnerVolumeSpecName "kube-api-access-4l2hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.572360 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0c5d70-78fa-42c1-9e79-745b42839d04-kube-api-access-lpqqq" (OuterVolumeSpecName: "kube-api-access-lpqqq") pod "0a0c5d70-78fa-42c1-9e79-745b42839d04" (UID: "0a0c5d70-78fa-42c1-9e79-745b42839d04"). InnerVolumeSpecName "kube-api-access-lpqqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.572456 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "0a0c5d70-78fa-42c1-9e79-745b42839d04" (UID: "0a0c5d70-78fa-42c1-9e79-745b42839d04"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.572765 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbb795f-ff35-4157-980c-baed2936f39e-kube-api-access-6dprs" (OuterVolumeSpecName: "kube-api-access-6dprs") pod "6fbb795f-ff35-4157-980c-baed2936f39e" (UID: "6fbb795f-ff35-4157-980c-baed2936f39e"). InnerVolumeSpecName "kube-api-access-6dprs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.585747 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e342052-636d-42a3-a409-57cc627ec192" (UID: "4e342052-636d-42a3-a409-57cc627ec192"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.671166 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e342052-636d-42a3-a409-57cc627ec192-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.671187 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dprs\" (UniqueName: \"kubernetes.io/projected/6fbb795f-ff35-4157-980c-baed2936f39e-kube-api-access-6dprs\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.671196 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.671205 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l2hx\" (UniqueName: \"kubernetes.io/projected/4e342052-636d-42a3-a409-57cc627ec192-kube-api-access-4l2hx\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.671214 4767 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.671222 4767 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a0c5d70-78fa-42c1-9e79-745b42839d04-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.671232 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpqqq\" (UniqueName: \"kubernetes.io/projected/0a0c5d70-78fa-42c1-9e79-745b42839d04-kube-api-access-lpqqq\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.680398 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fbb795f-ff35-4157-980c-baed2936f39e" (UID: "6fbb795f-ff35-4157-980c-baed2936f39e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:53 crc kubenswrapper[4767]: I1124 21:42:53.772562 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbb795f-ff35-4157-980c-baed2936f39e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.171865 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6bsn" event={"ID":"dc8951ce-1595-45e8-a952-9629251645c1","Type":"ContainerDied","Data":"8f5633fe0dbb1bdaee5311a716db59a0e75792a0131b0c3b7f026a7b1e884a00"} Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.171906 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x6bsn" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.171921 4767 scope.go:117] "RemoveContainer" containerID="7f88c7433d497a23286eaf1606db45546831528ea5ab2e55a92a578d9f27dbe1" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.173690 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" event={"ID":"0a0c5d70-78fa-42c1-9e79-745b42839d04","Type":"ContainerDied","Data":"d74e521545e3b22ae6acf8eb24d85331d79a6bf549162924a60613580bfde6b2"} Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.173704 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpc7v" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.177664 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwm65" event={"ID":"6fbb795f-ff35-4157-980c-baed2936f39e","Type":"ContainerDied","Data":"4f8a6122b038a71e6e737108be9104f65efbad617c22d8614534fcc9efc75c1b"} Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.177773 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vwm65" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.181591 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn4hh" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.181589 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn4hh" event={"ID":"4e342052-636d-42a3-a409-57cc627ec192","Type":"ContainerDied","Data":"73af8e1e48928e096645485238a7e29db91eb9945dd384ba387d968ac2e829ea"} Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.183523 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" event={"ID":"ad09ebd3-c91e-47fc-9f29-6a6acded7085","Type":"ContainerStarted","Data":"9035fe5350a9dc5a0aa8ffc3148476f69fc5a6ba32bc133e6f97126e9c108945"} Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.183666 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.183884 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" event={"ID":"ad09ebd3-c91e-47fc-9f29-6a6acded7085","Type":"ContainerStarted","Data":"6fb7269af7856dc8a3266539104a34f3fe7130d6f02cdd5f8403e9e74b19426a"} Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.185864 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rsd6" event={"ID":"5749cc38-18d2-411b-b0e8-20dade9fbcfb","Type":"ContainerDied","Data":"a75fb89d3c59c6c2b1a543584259a250b347d0259d4e36541955838ea16955eb"} Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.185938 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rsd6" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.188050 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.195072 4767 scope.go:117] "RemoveContainer" containerID="13254a02852ae0c5a22a4b1ad8eb4ca276cef6d68671d0e08833678336405231" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.215796 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tmp5k" podStartSLOduration=2.215208405 podStartE2EDuration="2.215208405s" podCreationTimestamp="2025-11-24 21:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:54.209961204 +0000 UTC m=+257.126944576" watchObservedRunningTime="2025-11-24 21:42:54.215208405 +0000 UTC m=+257.132191817" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.224532 4767 scope.go:117] "RemoveContainer" containerID="a2d107a90e18ab111f38ec7b9946165ecc998d8a5ded07a26fa7a2181848a3af" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.238559 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vwm65"] Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.242454 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vwm65"] Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.256474 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x6bsn"] Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.269450 4767 scope.go:117] "RemoveContainer" containerID="04075b9f1f2d922646375b0ea0c09ae956cce16bdbaad04d40fc5cac4e238400" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.276018 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x6bsn"] Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.280291 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpc7v"] Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.284209 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpc7v"] Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.287492 4767 scope.go:117] "RemoveContainer" containerID="35c579af1f880c5e4035a0baaf75b4954a46498ee162433515a5322e19f36e73" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.291986 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rsd6"] Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.294685 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6rsd6"] Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.306346 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sn4hh"] Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.309128 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sn4hh"] Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.315216 4767 scope.go:117] "RemoveContainer" containerID="130c06ce3a1741ada569a30971ceb34d6452d7a81f8417e6279609020d8a4ef8" Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.318749 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0c5d70-78fa-42c1-9e79-745b42839d04" path="/var/lib/kubelet/pods/0a0c5d70-78fa-42c1-9e79-745b42839d04/volumes"
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.319181 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e342052-636d-42a3-a409-57cc627ec192" path="/var/lib/kubelet/pods/4e342052-636d-42a3-a409-57cc627ec192/volumes"
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.319760 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" path="/var/lib/kubelet/pods/5749cc38-18d2-411b-b0e8-20dade9fbcfb/volumes"
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.320932 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fbb795f-ff35-4157-980c-baed2936f39e" path="/var/lib/kubelet/pods/6fbb795f-ff35-4157-980c-baed2936f39e/volumes"
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.321474 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc8951ce-1595-45e8-a952-9629251645c1" path="/var/lib/kubelet/pods/dc8951ce-1595-45e8-a952-9629251645c1/volumes"
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.330709 4767 scope.go:117] "RemoveContainer" containerID="a9c4f488e9c21b1282e26284cf71e85a60df111cfbe256546a54d594c98a990c"
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.344078 4767 scope.go:117] "RemoveContainer" containerID="a9cc17d9be993e073ba2942965c5ae5902b1ec90e09ee2d558a31b1cbb360f32"
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.357099 4767 scope.go:117] "RemoveContainer" containerID="0ce0fc8043acf937f6dfcb61b19686f272af14d2415b671f403e143372f7c25b"
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.368482 4767 scope.go:117] "RemoveContainer" containerID="ec023dadb8b7467848800b5e0adfac2d3168fb7fd2b5009c0f4bec14248c675e"
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.378235 4767 scope.go:117] "RemoveContainer" containerID="89042bc3446893548691f9571da335bfd98d8675ff5408be006f0d46bcd09ca3"
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.388483 4767 scope.go:117] "RemoveContainer" containerID="9d05af0b2d6f1de7db9851870df8b3d883107fc0545a5bc88a794ae078a8b4dd"
Nov 24 21:42:54 crc kubenswrapper[4767]: I1124 21:42:54.398429 4767 scope.go:117] "RemoveContainer" containerID="0a8f68b1d67b72f91685551c4f6eaf90e1ce2b401935e1004e140e254fdab2dc"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.120802 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z47p4"]
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121024 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e342052-636d-42a3-a409-57cc627ec192" containerName="extract-utilities"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121038 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e342052-636d-42a3-a409-57cc627ec192" containerName="extract-utilities"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121050 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" containerName="extract-utilities"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121058 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" containerName="extract-utilities"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121072 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" containerName="extract-content"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121080 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" containerName="extract-content"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121089 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8951ce-1595-45e8-a952-9629251645c1" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121097 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8951ce-1595-45e8-a952-9629251645c1" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121106 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121117 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121126 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e342052-636d-42a3-a409-57cc627ec192" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121134 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e342052-636d-42a3-a409-57cc627ec192" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121143 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e342052-636d-42a3-a409-57cc627ec192" containerName="extract-content"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121151 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e342052-636d-42a3-a409-57cc627ec192" containerName="extract-content"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121161 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8951ce-1595-45e8-a952-9629251645c1" containerName="extract-content"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121169 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8951ce-1595-45e8-a952-9629251645c1" containerName="extract-content"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121179 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbb795f-ff35-4157-980c-baed2936f39e" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121186 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbb795f-ff35-4157-980c-baed2936f39e" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121196 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbb795f-ff35-4157-980c-baed2936f39e" containerName="extract-utilities"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121205 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbb795f-ff35-4157-980c-baed2936f39e" containerName="extract-utilities"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121216 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8951ce-1595-45e8-a952-9629251645c1" containerName="extract-utilities"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121223 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8951ce-1595-45e8-a952-9629251645c1" containerName="extract-utilities"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121233 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbb795f-ff35-4157-980c-baed2936f39e" containerName="extract-content"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121241 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbb795f-ff35-4157-980c-baed2936f39e" containerName="extract-content"
Nov 24 21:42:55 crc kubenswrapper[4767]: E1124 21:42:55.121252 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0c5d70-78fa-42c1-9e79-745b42839d04" containerName="marketplace-operator"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121260 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0c5d70-78fa-42c1-9e79-745b42839d04" containerName="marketplace-operator"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121384 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fbb795f-ff35-4157-980c-baed2936f39e" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121403 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc8951ce-1595-45e8-a952-9629251645c1" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121413 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5749cc38-18d2-411b-b0e8-20dade9fbcfb" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121424 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e342052-636d-42a3-a409-57cc627ec192" containerName="registry-server"
Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.121433 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0c5d70-78fa-42c1-9e79-745b42839d04" containerName="marketplace-operator"
Need to start a new one" pod="openshift-marketplace/certified-operators-z47p4" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.124335 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.137174 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z47p4"] Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.195109 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-catalog-content\") pod \"certified-operators-z47p4\" (UID: \"5e75c583-394f-42dd-84df-0dd865218112\") " pod="openshift-marketplace/certified-operators-z47p4" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.195343 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-utilities\") pod \"certified-operators-z47p4\" (UID: \"5e75c583-394f-42dd-84df-0dd865218112\") " pod="openshift-marketplace/certified-operators-z47p4" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.195397 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scm8k\" (UniqueName: \"kubernetes.io/projected/5e75c583-394f-42dd-84df-0dd865218112-kube-api-access-scm8k\") pod \"certified-operators-z47p4\" (UID: \"5e75c583-394f-42dd-84df-0dd865218112\") " pod="openshift-marketplace/certified-operators-z47p4" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.296719 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scm8k\" (UniqueName: \"kubernetes.io/projected/5e75c583-394f-42dd-84df-0dd865218112-kube-api-access-scm8k\") pod \"certified-operators-z47p4\" (UID: \"5e75c583-394f-42dd-84df-0dd865218112\") " pod="openshift-marketplace/certified-operators-z47p4" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.296876 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-catalog-content\") pod \"certified-operators-z47p4\" (UID: \"5e75c583-394f-42dd-84df-0dd865218112\") " pod="openshift-marketplace/certified-operators-z47p4" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.296922 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-utilities\") pod \"certified-operators-z47p4\" (UID: \"5e75c583-394f-42dd-84df-0dd865218112\") " pod="openshift-marketplace/certified-operators-z47p4" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.297625 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-utilities\") pod \"certified-operators-z47p4\" (UID: \"5e75c583-394f-42dd-84df-0dd865218112\") " pod="openshift-marketplace/certified-operators-z47p4" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.298330 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-catalog-content\") pod \"certified-operators-z47p4\" (UID: 
\"5e75c583-394f-42dd-84df-0dd865218112\") " pod="openshift-marketplace/certified-operators-z47p4" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.328982 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scm8k\" (UniqueName: \"kubernetes.io/projected/5e75c583-394f-42dd-84df-0dd865218112-kube-api-access-scm8k\") pod \"certified-operators-z47p4\" (UID: \"5e75c583-394f-42dd-84df-0dd865218112\") " pod="openshift-marketplace/certified-operators-z47p4" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.330772 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lndk9"] Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.331861 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lndk9" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.333944 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.334419 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lndk9"] Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.397867 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1-utilities\") pod \"redhat-marketplace-lndk9\" (UID: \"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1\") " pod="openshift-marketplace/redhat-marketplace-lndk9" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.398158 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfxqp\" (UniqueName: \"kubernetes.io/projected/adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1-kube-api-access-gfxqp\") pod \"redhat-marketplace-lndk9\" (UID: \"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1\") " pod="openshift-marketplace/redhat-marketplace-lndk9" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.398190 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1-catalog-content\") pod \"redhat-marketplace-lndk9\" (UID: \"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1\") " pod="openshift-marketplace/redhat-marketplace-lndk9" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.440330 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z47p4" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.499354 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1-catalog-content\") pod \"redhat-marketplace-lndk9\" (UID: \"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1\") " pod="openshift-marketplace/redhat-marketplace-lndk9" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.499442 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1-utilities\") pod \"redhat-marketplace-lndk9\" (UID: \"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1\") " pod="openshift-marketplace/redhat-marketplace-lndk9" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.499475 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfxqp\" (UniqueName: \"kubernetes.io/projected/adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1-kube-api-access-gfxqp\") pod \"redhat-marketplace-lndk9\" (UID: \"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1\") " pod="openshift-marketplace/redhat-marketplace-lndk9" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.499920 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1-catalog-content\") pod \"redhat-marketplace-lndk9\" (UID: \"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1\") " pod="openshift-marketplace/redhat-marketplace-lndk9" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.500368 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1-utilities\") pod \"redhat-marketplace-lndk9\" (UID: \"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1\") " pod="openshift-marketplace/redhat-marketplace-lndk9" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.517386 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfxqp\" (UniqueName: \"kubernetes.io/projected/adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1-kube-api-access-gfxqp\") pod \"redhat-marketplace-lndk9\" (UID: \"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1\") " pod="openshift-marketplace/redhat-marketplace-lndk9" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.653170 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lndk9" Nov 24 21:42:55 crc kubenswrapper[4767]: I1124 21:42:55.829779 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z47p4"] Nov 24 21:42:55 crc kubenswrapper[4767]: W1124 21:42:55.833574 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e75c583_394f_42dd_84df_0dd865218112.slice/crio-de4a2e46ff58562f16236683fed5e7037a47a021288acd1a39390cb5a6082667 WatchSource:0}: Error finding container de4a2e46ff58562f16236683fed5e7037a47a021288acd1a39390cb5a6082667: Status 404 returned error can't find the container with id de4a2e46ff58562f16236683fed5e7037a47a021288acd1a39390cb5a6082667 Nov 24 21:42:56 crc kubenswrapper[4767]: I1124 21:42:56.036401 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lndk9"] Nov 24 21:42:56 crc kubenswrapper[4767]: W1124 21:42:56.050647 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc6f6b3_70a5_4fae_a242_9ed75cb3c9b1.slice/crio-3fbc8e2d109af623c552cfbab5316985f63d26b28eb75227eb1131c2a4a604f5 WatchSource:0}: Error finding container 3fbc8e2d109af623c552cfbab5316985f63d26b28eb75227eb1131c2a4a604f5: Status 404 returned error can't find the container with id 3fbc8e2d109af623c552cfbab5316985f63d26b28eb75227eb1131c2a4a604f5 Nov 24 21:42:56 crc kubenswrapper[4767]: I1124 21:42:56.202595 4767 generic.go:334] "Generic (PLEG): container finished" podID="5e75c583-394f-42dd-84df-0dd865218112" containerID="4cbcfed91939f860474880c01edfed717207d36e3b6c48d04628d38434a2ff12" exitCode=0 Nov 24 21:42:56 crc kubenswrapper[4767]: I1124 21:42:56.202671 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47p4" event={"ID":"5e75c583-394f-42dd-84df-0dd865218112","Type":"ContainerDied","Data":"4cbcfed91939f860474880c01edfed717207d36e3b6c48d04628d38434a2ff12"} Nov 24 21:42:56 crc kubenswrapper[4767]: I1124 21:42:56.202697 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47p4" event={"ID":"5e75c583-394f-42dd-84df-0dd865218112","Type":"ContainerStarted","Data":"de4a2e46ff58562f16236683fed5e7037a47a021288acd1a39390cb5a6082667"} Nov 24 21:42:56 crc kubenswrapper[4767]: I1124 21:42:56.205658 4767 generic.go:334] "Generic (PLEG): container finished" podID="adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1" containerID="1f1d6f9abe8f8063b3c48dacd890a2870ac1d9af8b00e615becf8105480d4912" exitCode=0 Nov 24 21:42:56 crc kubenswrapper[4767]: I1124 21:42:56.207390 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lndk9" event={"ID":"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1","Type":"ContainerDied","Data":"1f1d6f9abe8f8063b3c48dacd890a2870ac1d9af8b00e615becf8105480d4912"} Nov 24 21:42:56 crc kubenswrapper[4767]: I1124 21:42:56.207452 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lndk9" event={"ID":"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1","Type":"ContainerStarted","Data":"3fbc8e2d109af623c552cfbab5316985f63d26b28eb75227eb1131c2a4a604f5"} Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.225540 4767 generic.go:334] "Generic (PLEG): container finished" podID="adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1" containerID="1348c2537423ea85820ecb4b85e1ced2f181f2c56f96905515f612bbbb256d99" 
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.225540 4767 generic.go:334] "Generic (PLEG): container finished" podID="adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1" containerID="1348c2537423ea85820ecb4b85e1ced2f181f2c56f96905515f612bbbb256d99" exitCode=0
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.225611 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lndk9" event={"ID":"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1","Type":"ContainerDied","Data":"1348c2537423ea85820ecb4b85e1ced2f181f2c56f96905515f612bbbb256d99"}
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.238251 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47p4" event={"ID":"5e75c583-394f-42dd-84df-0dd865218112","Type":"ContainerStarted","Data":"617347b765e96db12400cb54518775605ef85755c9267eef6837c3893e380a5c"}
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.526164 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zksv2"]
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.527497 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.529863 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.541885 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zksv2"]
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.631837 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f61f53-2472-439d-929c-29955a7d1849-utilities\") pod \"community-operators-zksv2\" (UID: \"29f61f53-2472-439d-929c-29955a7d1849\") " pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.632164 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trzl8\" (UniqueName: \"kubernetes.io/projected/29f61f53-2472-439d-929c-29955a7d1849-kube-api-access-trzl8\") pod \"community-operators-zksv2\" (UID: \"29f61f53-2472-439d-929c-29955a7d1849\") " pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.632226 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f61f53-2472-439d-929c-29955a7d1849-catalog-content\") pod \"community-operators-zksv2\" (UID: \"29f61f53-2472-439d-929c-29955a7d1849\") " pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.726594 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pm5z7"]
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.727627 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.732177 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.733584 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f61f53-2472-439d-929c-29955a7d1849-catalog-content\") pod \"community-operators-zksv2\" (UID: \"29f61f53-2472-439d-929c-29955a7d1849\") " pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.733938 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f61f53-2472-439d-929c-29955a7d1849-utilities\") pod \"community-operators-zksv2\" (UID: \"29f61f53-2472-439d-929c-29955a7d1849\") " pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.734107 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trzl8\" (UniqueName: \"kubernetes.io/projected/29f61f53-2472-439d-929c-29955a7d1849-kube-api-access-trzl8\") pod \"community-operators-zksv2\" (UID: \"29f61f53-2472-439d-929c-29955a7d1849\") " pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.734112 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f61f53-2472-439d-929c-29955a7d1849-catalog-content\") pod \"community-operators-zksv2\" (UID: \"29f61f53-2472-439d-929c-29955a7d1849\") " pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.734386 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f61f53-2472-439d-929c-29955a7d1849-utilities\") pod \"community-operators-zksv2\" (UID: \"29f61f53-2472-439d-929c-29955a7d1849\") " pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.737716 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pm5z7"]
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.759754 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trzl8\" (UniqueName: \"kubernetes.io/projected/29f61f53-2472-439d-929c-29955a7d1849-kube-api-access-trzl8\") pod \"community-operators-zksv2\" (UID: \"29f61f53-2472-439d-929c-29955a7d1849\") " pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.835384 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/116b0d83-f4a6-4033-82fe-a29430d7b576-utilities\") pod \"redhat-operators-pm5z7\" (UID: \"116b0d83-f4a6-4033-82fe-a29430d7b576\") " pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.835491 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jw48\" (UniqueName: \"kubernetes.io/projected/116b0d83-f4a6-4033-82fe-a29430d7b576-kube-api-access-4jw48\") pod \"redhat-operators-pm5z7\" (UID: \"116b0d83-f4a6-4033-82fe-a29430d7b576\") " pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.835661 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/116b0d83-f4a6-4033-82fe-a29430d7b576-catalog-content\") pod \"redhat-operators-pm5z7\" (UID: \"116b0d83-f4a6-4033-82fe-a29430d7b576\") " pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.853069 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.936989 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jw48\" (UniqueName: \"kubernetes.io/projected/116b0d83-f4a6-4033-82fe-a29430d7b576-kube-api-access-4jw48\") pod \"redhat-operators-pm5z7\" (UID: \"116b0d83-f4a6-4033-82fe-a29430d7b576\") " pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.937311 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/116b0d83-f4a6-4033-82fe-a29430d7b576-catalog-content\") pod \"redhat-operators-pm5z7\" (UID: \"116b0d83-f4a6-4033-82fe-a29430d7b576\") " pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.937354 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/116b0d83-f4a6-4033-82fe-a29430d7b576-utilities\") pod \"redhat-operators-pm5z7\" (UID: \"116b0d83-f4a6-4033-82fe-a29430d7b576\") " pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.937932 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/116b0d83-f4a6-4033-82fe-a29430d7b576-utilities\") pod \"redhat-operators-pm5z7\" (UID: \"116b0d83-f4a6-4033-82fe-a29430d7b576\") " pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.938537 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/116b0d83-f4a6-4033-82fe-a29430d7b576-catalog-content\") pod \"redhat-operators-pm5z7\" (UID: \"116b0d83-f4a6-4033-82fe-a29430d7b576\") " pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:42:57 crc kubenswrapper[4767]: I1124 21:42:57.983917 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jw48\" (UniqueName: \"kubernetes.io/projected/116b0d83-f4a6-4033-82fe-a29430d7b576-kube-api-access-4jw48\") pod \"redhat-operators-pm5z7\" (UID: \"116b0d83-f4a6-4033-82fe-a29430d7b576\") " pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:42:58 crc kubenswrapper[4767]: I1124 21:42:58.049920 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:42:58 crc kubenswrapper[4767]: I1124 21:42:58.061736 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zksv2"]
Nov 24 21:42:58 crc kubenswrapper[4767]: W1124 21:42:58.070180 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f61f53_2472_439d_929c_29955a7d1849.slice/crio-f04e5704f66259030c34f3b456c66b5732cd5fe0f1d74b679769cdccfd7f3635 WatchSource:0}: Error finding container f04e5704f66259030c34f3b456c66b5732cd5fe0f1d74b679769cdccfd7f3635: Status 404 returned error can't find the container with id f04e5704f66259030c34f3b456c66b5732cd5fe0f1d74b679769cdccfd7f3635
Nov 24 21:42:58 crc kubenswrapper[4767]: I1124 21:42:58.243639 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zksv2" event={"ID":"29f61f53-2472-439d-929c-29955a7d1849","Type":"ContainerStarted","Data":"ce306694ec9c2d5d8e5a1cb47045a21ab78a7a6a8b56bf3b8938fb015a83f7a9"}
Nov 24 21:42:58 crc kubenswrapper[4767]: I1124 21:42:58.243851 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zksv2" event={"ID":"29f61f53-2472-439d-929c-29955a7d1849","Type":"ContainerStarted","Data":"f04e5704f66259030c34f3b456c66b5732cd5fe0f1d74b679769cdccfd7f3635"}
Nov 24 21:42:58 crc kubenswrapper[4767]: I1124 21:42:58.246082 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lndk9" event={"ID":"adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1","Type":"ContainerStarted","Data":"17e11cd968e5126b8096e941e99afd73d288356b70b98492aa6fbcd50ee87bd7"}
Nov 24 21:42:58 crc kubenswrapper[4767]: I1124 21:42:58.252657 4767 generic.go:334] "Generic (PLEG): container finished" podID="5e75c583-394f-42dd-84df-0dd865218112" containerID="617347b765e96db12400cb54518775605ef85755c9267eef6837c3893e380a5c" exitCode=0
Nov 24 21:42:58 crc kubenswrapper[4767]: I1124 21:42:58.252702 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47p4" event={"ID":"5e75c583-394f-42dd-84df-0dd865218112","Type":"ContainerDied","Data":"617347b765e96db12400cb54518775605ef85755c9267eef6837c3893e380a5c"}
Nov 24 21:42:58 crc kubenswrapper[4767]: I1124 21:42:58.297586 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lndk9" podStartSLOduration=1.8791334480000002 podStartE2EDuration="3.297566572s" podCreationTimestamp="2025-11-24 21:42:55 +0000 UTC" firstStartedPulling="2025-11-24 21:42:56.208656701 +0000 UTC m=+259.125640063" lastFinishedPulling="2025-11-24 21:42:57.627089815 +0000 UTC m=+260.544073187" observedRunningTime="2025-11-24 21:42:58.294117348 +0000 UTC m=+261.211100770" watchObservedRunningTime="2025-11-24 21:42:58.297566572 +0000 UTC m=+261.214549944"
Nov 24 21:42:58 crc kubenswrapper[4767]: I1124 21:42:58.424675 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pm5z7"]
Nov 24 21:42:58 crc kubenswrapper[4767]: W1124 21:42:58.430708 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod116b0d83_f4a6_4033_82fe_a29430d7b576.slice/crio-3efe6a18212a05d014785fe7e1576a16ed9a2a76b4dcc9372a510429d53ff55b WatchSource:0}: Error finding container 3efe6a18212a05d014785fe7e1576a16ed9a2a76b4dcc9372a510429d53ff55b: Status 404 returned error can't find the container with id 3efe6a18212a05d014785fe7e1576a16ed9a2a76b4dcc9372a510429d53ff55b
Nov 24 21:42:59 crc kubenswrapper[4767]: I1124 21:42:59.259923 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47p4" event={"ID":"5e75c583-394f-42dd-84df-0dd865218112","Type":"ContainerStarted","Data":"920689bb359b06a5904af46e9450ada1b12c954b5605ab1172cb5432c7b72117"}
Nov 24 21:42:59 crc kubenswrapper[4767]: I1124 21:42:59.261791 4767 generic.go:334] "Generic (PLEG): container finished" podID="29f61f53-2472-439d-929c-29955a7d1849" containerID="ce306694ec9c2d5d8e5a1cb47045a21ab78a7a6a8b56bf3b8938fb015a83f7a9" exitCode=0
Nov 24 21:42:59 crc kubenswrapper[4767]: I1124 21:42:59.261887 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zksv2" event={"ID":"29f61f53-2472-439d-929c-29955a7d1849","Type":"ContainerDied","Data":"ce306694ec9c2d5d8e5a1cb47045a21ab78a7a6a8b56bf3b8938fb015a83f7a9"}
Nov 24 21:42:59 crc kubenswrapper[4767]: I1124 21:42:59.265626 4767 generic.go:334] "Generic (PLEG): container finished" podID="116b0d83-f4a6-4033-82fe-a29430d7b576" containerID="4140f6c7368be492b7917d026514b4702768637e65273a11205c74be58d8c5d4" exitCode=0
Nov 24 21:42:59 crc kubenswrapper[4767]: I1124 21:42:59.265727 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5z7" event={"ID":"116b0d83-f4a6-4033-82fe-a29430d7b576","Type":"ContainerDied","Data":"4140f6c7368be492b7917d026514b4702768637e65273a11205c74be58d8c5d4"}
Nov 24 21:42:59 crc kubenswrapper[4767]: I1124 21:42:59.265763 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5z7" event={"ID":"116b0d83-f4a6-4033-82fe-a29430d7b576","Type":"ContainerStarted","Data":"3efe6a18212a05d014785fe7e1576a16ed9a2a76b4dcc9372a510429d53ff55b"}
Nov 24 21:42:59 crc kubenswrapper[4767]: I1124 21:42:59.280383 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z47p4" podStartSLOduration=1.6127968099999999 podStartE2EDuration="4.280362732s" podCreationTimestamp="2025-11-24 21:42:55 +0000 UTC" firstStartedPulling="2025-11-24 21:42:56.207492279 +0000 UTC m=+259.124475651" lastFinishedPulling="2025-11-24 21:42:58.875058191 +0000 UTC m=+261.792041573" observedRunningTime="2025-11-24 21:42:59.278785749 +0000 UTC m=+262.195769121" watchObservedRunningTime="2025-11-24 21:42:59.280362732 +0000 UTC m=+262.197346104"
Nov 24 21:43:00 crc kubenswrapper[4767]: I1124 21:43:00.271607 4767 generic.go:334] "Generic (PLEG): container finished" podID="29f61f53-2472-439d-929c-29955a7d1849" containerID="bae2390408b45723e19b1910c949819d13975c91eba886017904c0cc98d0d9a0" exitCode=0
Nov 24 21:43:00 crc kubenswrapper[4767]: I1124 21:43:00.271695 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zksv2" event={"ID":"29f61f53-2472-439d-929c-29955a7d1849","Type":"ContainerDied","Data":"bae2390408b45723e19b1910c949819d13975c91eba886017904c0cc98d0d9a0"}
Nov 24 21:43:00 crc kubenswrapper[4767]: I1124 21:43:00.274440 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5z7" event={"ID":"116b0d83-f4a6-4033-82fe-a29430d7b576","Type":"ContainerStarted","Data":"632bed7a9aeb270a3e829758786fd9e92df3941dff0a3e0243d91861083ae7cb"}
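
Every catalog pod above repeats the same PLEG shape: the extract-utilities and extract-content steps each die with exitCode=0, then registry-server starts. A Go sketch that folds the SyncLoop (PLEG) entries into a per-pod timeline to make that sequence visible; the regex targets this log's wording and is an assumption, not a stable interface:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // pleg captures the pod name, event type, and container ID from the
    // "SyncLoop (PLEG): event for pod" entries in this journal.
    var pleg = regexp.MustCompile(`pod="([^"]+)" event=.*"Type":"(ContainerStarted|ContainerDied)","Data":"([0-9a-f]+)"`)

    func main() {
        timeline := map[string][]string{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            if m := pleg.FindStringSubmatch(sc.Text()); m != nil {
                timeline[m[1]] = append(timeline[m[1]], m[2]+":"+m[3][:12])
            }
        }
        for pod, events := range timeline {
            fmt.Println(pod, events)
        }
    }
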
Nov 24 21:43:01 crc kubenswrapper[4767]: I1124 21:43:01.280653 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zksv2" event={"ID":"29f61f53-2472-439d-929c-29955a7d1849","Type":"ContainerStarted","Data":"a670c56234279f06bd752363800f835f5ef67c15f4b67a87b44e1ee885301fff"}
Nov 24 21:43:01 crc kubenswrapper[4767]: I1124 21:43:01.282831 4767 generic.go:334] "Generic (PLEG): container finished" podID="116b0d83-f4a6-4033-82fe-a29430d7b576" containerID="632bed7a9aeb270a3e829758786fd9e92df3941dff0a3e0243d91861083ae7cb" exitCode=0
Nov 24 21:43:01 crc kubenswrapper[4767]: I1124 21:43:01.282940 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5z7" event={"ID":"116b0d83-f4a6-4033-82fe-a29430d7b576","Type":"ContainerDied","Data":"632bed7a9aeb270a3e829758786fd9e92df3941dff0a3e0243d91861083ae7cb"}
Nov 24 21:43:01 crc kubenswrapper[4767]: I1124 21:43:01.300338 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zksv2" podStartSLOduration=2.808588542 podStartE2EDuration="4.300319691s" podCreationTimestamp="2025-11-24 21:42:57 +0000 UTC" firstStartedPulling="2025-11-24 21:42:59.265188087 +0000 UTC m=+262.182171459" lastFinishedPulling="2025-11-24 21:43:00.756919226 +0000 UTC m=+263.673902608" observedRunningTime="2025-11-24 21:43:01.29701455 +0000 UTC m=+264.213997932" watchObservedRunningTime="2025-11-24 21:43:01.300319691 +0000 UTC m=+264.217303063"
Nov 24 21:43:02 crc kubenswrapper[4767]: I1124 21:43:02.290918 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5z7" event={"ID":"116b0d83-f4a6-4033-82fe-a29430d7b576","Type":"ContainerStarted","Data":"8220e0d50654bb1fc6ed75adbb093867430a378837b08d29d91e2ce9c32c6152"}
Nov 24 21:43:03 crc kubenswrapper[4767]: I1124 21:43:03.311844 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pm5z7" podStartSLOduration=3.86835892 podStartE2EDuration="6.311828518s" podCreationTimestamp="2025-11-24 21:42:57 +0000 UTC" firstStartedPulling="2025-11-24 21:42:59.266719239 +0000 UTC m=+262.183702611" lastFinishedPulling="2025-11-24 21:43:01.710188817 +0000 UTC m=+264.627172209" observedRunningTime="2025-11-24 21:43:03.310024419 +0000 UTC m=+266.227007821" watchObservedRunningTime="2025-11-24 21:43:03.311828518 +0000 UTC m=+266.228811890"
Nov 24 21:43:05 crc kubenswrapper[4767]: I1124 21:43:05.442161 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z47p4"
Nov 24 21:43:05 crc kubenswrapper[4767]: I1124 21:43:05.443043 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z47p4"
Nov 24 21:43:05 crc kubenswrapper[4767]: I1124 21:43:05.482724 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z47p4"
Nov 24 21:43:05 crc kubenswrapper[4767]: I1124 21:43:05.654077 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lndk9"
Nov 24 21:43:05 crc kubenswrapper[4767]: I1124 21:43:05.654476 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lndk9"
Nov 24 21:43:05 crc kubenswrapper[4767]: I1124 21:43:05.719121 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lndk9"
Nov 24 21:43:06 crc kubenswrapper[4767]: I1124 21:43:06.354329 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z47p4"
Nov 24 21:43:06 crc kubenswrapper[4767]: I1124 21:43:06.354574 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lndk9"
Nov 24 21:43:07 crc kubenswrapper[4767]: I1124 21:43:07.853259 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:43:07 crc kubenswrapper[4767]: I1124 21:43:07.855524 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:43:07 crc kubenswrapper[4767]: I1124 21:43:07.914758 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:43:08 crc kubenswrapper[4767]: I1124 21:43:08.050678 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:43:08 crc kubenswrapper[4767]: I1124 21:43:08.050964 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:43:08 crc kubenswrapper[4767]: I1124 21:43:08.092479 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:43:08 crc kubenswrapper[4767]: I1124 21:43:08.361367 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pm5z7"
Nov 24 21:43:08 crc kubenswrapper[4767]: I1124 21:43:08.372122 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zksv2"
Nov 24 21:44:05 crc kubenswrapper[4767]: I1124 21:44:05.482227 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:44:05 crc kubenswrapper[4767]: I1124 21:44:05.483038 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:44:35 crc kubenswrapper[4767]: I1124 21:44:35.483466 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:44:35 crc kubenswrapper[4767]: I1124 21:44:35.484255 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:44:42 crc kubenswrapper[4767]: E1124 21:44:42.115785 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.142905 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"]
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.145141 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.150297 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.151205 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.158949 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"]
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.253183 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-secret-volume\") pod \"collect-profiles-29400345-md6mv\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.253353 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8c75\" (UniqueName: \"kubernetes.io/projected/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-kube-api-access-t8c75\") pod \"collect-profiles-29400345-md6mv\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.253419 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-config-volume\") pod \"collect-profiles-29400345-md6mv\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.354786 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8c75\" (UniqueName: \"kubernetes.io/projected/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-kube-api-access-t8c75\") pod \"collect-profiles-29400345-md6mv\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.354887 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-config-volume\") pod \"collect-profiles-29400345-md6mv\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.355289 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-secret-volume\") pod \"collect-profiles-29400345-md6mv\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.356878 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-config-volume\") pod \"collect-profiles-29400345-md6mv\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.368899 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-secret-volume\") pod \"collect-profiles-29400345-md6mv\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.384506 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8c75\" (UniqueName: \"kubernetes.io/projected/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-kube-api-access-t8c75\") pod \"collect-profiles-29400345-md6mv\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.475544 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.881209 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"]
Nov 24 21:45:00 crc kubenswrapper[4767]: I1124 21:45:00.960500 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv" event={"ID":"66b98d4c-dd1c-49a7-97a5-fab5e138fefd","Type":"ContainerStarted","Data":"4c5041f849033e26dc73255f2f27d76318d8b4d9753a36f81e66f0ddffd6a344"}
Nov 24 21:45:01 crc kubenswrapper[4767]: I1124 21:45:01.969655 4767 generic.go:334] "Generic (PLEG): container finished" podID="66b98d4c-dd1c-49a7-97a5-fab5e138fefd" containerID="8e78ca6e7a5d36bcd8fb07fd4e44ebbd484b67193ecd129ff945a7016e779faf" exitCode=0
Nov 24 21:45:01 crc kubenswrapper[4767]: I1124 21:45:01.969705 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv" event={"ID":"66b98d4c-dd1c-49a7-97a5-fab5e138fefd","Type":"ContainerDied","Data":"8e78ca6e7a5d36bcd8fb07fd4e44ebbd484b67193ecd129ff945a7016e779faf"}
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv" Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.300366 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-secret-volume\") pod \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.300450 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8c75\" (UniqueName: \"kubernetes.io/projected/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-kube-api-access-t8c75\") pod \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.300481 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-config-volume\") pod \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\" (UID: \"66b98d4c-dd1c-49a7-97a5-fab5e138fefd\") " Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.300882 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-config-volume" (OuterVolumeSpecName: "config-volume") pod "66b98d4c-dd1c-49a7-97a5-fab5e138fefd" (UID: "66b98d4c-dd1c-49a7-97a5-fab5e138fefd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.304995 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-kube-api-access-t8c75" (OuterVolumeSpecName: "kube-api-access-t8c75") pod "66b98d4c-dd1c-49a7-97a5-fab5e138fefd" (UID: "66b98d4c-dd1c-49a7-97a5-fab5e138fefd"). InnerVolumeSpecName "kube-api-access-t8c75". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.305367 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "66b98d4c-dd1c-49a7-97a5-fab5e138fefd" (UID: "66b98d4c-dd1c-49a7-97a5-fab5e138fefd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.401233 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.401285 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.401295 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8c75\" (UniqueName: \"kubernetes.io/projected/66b98d4c-dd1c-49a7-97a5-fab5e138fefd-kube-api-access-t8c75\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.985769 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv" Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.986179 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv" event={"ID":"66b98d4c-dd1c-49a7-97a5-fab5e138fefd","Type":"ContainerDied","Data":"4c5041f849033e26dc73255f2f27d76318d8b4d9753a36f81e66f0ddffd6a344"} Nov 24 21:45:03 crc kubenswrapper[4767]: I1124 21:45:03.986226 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5041f849033e26dc73255f2f27d76318d8b4d9753a36f81e66f0ddffd6a344" Nov 24 21:45:05 crc kubenswrapper[4767]: I1124 21:45:05.481377 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:45:05 crc kubenswrapper[4767]: I1124 21:45:05.481985 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:45:05 crc kubenswrapper[4767]: I1124 21:45:05.482179 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:45:05 crc kubenswrapper[4767]: I1124 21:45:05.483673 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"318061ec20e01e7b9e6b9071eca399b8371f6aa151e176eee69db149828d7014"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:45:05 crc kubenswrapper[4767]: I1124 21:45:05.483796 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://318061ec20e01e7b9e6b9071eca399b8371f6aa151e176eee69db149828d7014" gracePeriod=600 Nov 24 21:45:06 crc kubenswrapper[4767]: I1124 21:45:06.001612 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="318061ec20e01e7b9e6b9071eca399b8371f6aa151e176eee69db149828d7014" exitCode=0 Nov 24 21:45:06 crc kubenswrapper[4767]: I1124 21:45:06.001729 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"318061ec20e01e7b9e6b9071eca399b8371f6aa151e176eee69db149828d7014"} Nov 24 21:45:06 crc kubenswrapper[4767]: I1124 21:45:06.002060 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"be42d6aff78e041edb5424f488e6dd92a88fa38a755f0e75223f00653906bf6d"} Nov 24 21:45:06 crc kubenswrapper[4767]: I1124 21:45:06.002078 4767 scope.go:117] "RemoveContainer" containerID="860501fedaf155dd1c590174b7223e5f86047a1b3959cda0fd2750e8dc78aac9" Nov 
24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.683752 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nbd9k"] Nov 24 21:45:47 crc kubenswrapper[4767]: E1124 21:45:47.684679 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b98d4c-dd1c-49a7-97a5-fab5e138fefd" containerName="collect-profiles" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.684697 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b98d4c-dd1c-49a7-97a5-fab5e138fefd" containerName="collect-profiles" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.684825 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b98d4c-dd1c-49a7-97a5-fab5e138fefd" containerName="collect-profiles" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.685260 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.691451 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nbd9k"] Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.717323 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/659dcfdd-e784-41ea-840f-a0c183639676-bound-sa-token\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.717626 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/659dcfdd-e784-41ea-840f-a0c183639676-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.717829 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/659dcfdd-e784-41ea-840f-a0c183639676-registry-tls\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.717871 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/659dcfdd-e784-41ea-840f-a0c183639676-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.717930 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdgn6\" (UniqueName: \"kubernetes.io/projected/659dcfdd-e784-41ea-840f-a0c183639676-kube-api-access-gdgn6\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.717955 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/659dcfdd-e784-41ea-840f-a0c183639676-registry-certificates\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.717975 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.718014 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/659dcfdd-e784-41ea-840f-a0c183639676-trusted-ca\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.739215 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.818704 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/659dcfdd-e784-41ea-840f-a0c183639676-trusted-ca\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.818776 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/659dcfdd-e784-41ea-840f-a0c183639676-bound-sa-token\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.818816 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/659dcfdd-e784-41ea-840f-a0c183639676-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.818836 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/659dcfdd-e784-41ea-840f-a0c183639676-registry-tls\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.818859 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/659dcfdd-e784-41ea-840f-a0c183639676-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: 
\"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.818880 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdgn6\" (UniqueName: \"kubernetes.io/projected/659dcfdd-e784-41ea-840f-a0c183639676-kube-api-access-gdgn6\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.818905 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/659dcfdd-e784-41ea-840f-a0c183639676-registry-certificates\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.819750 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/659dcfdd-e784-41ea-840f-a0c183639676-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.820243 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/659dcfdd-e784-41ea-840f-a0c183639676-registry-certificates\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.821060 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/659dcfdd-e784-41ea-840f-a0c183639676-trusted-ca\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.824820 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/659dcfdd-e784-41ea-840f-a0c183639676-registry-tls\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.829960 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/659dcfdd-e784-41ea-840f-a0c183639676-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.835032 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdgn6\" (UniqueName: \"kubernetes.io/projected/659dcfdd-e784-41ea-840f-a0c183639676-kube-api-access-gdgn6\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:47 crc kubenswrapper[4767]: I1124 21:45:47.835951 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/659dcfdd-e784-41ea-840f-a0c183639676-bound-sa-token\") pod \"image-registry-66df7c8f76-nbd9k\" (UID: \"659dcfdd-e784-41ea-840f-a0c183639676\") " pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:48 crc kubenswrapper[4767]: I1124 21:45:48.007454 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:48 crc kubenswrapper[4767]: I1124 21:45:48.195103 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nbd9k"] Nov 24 21:45:48 crc kubenswrapper[4767]: I1124 21:45:48.284300 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" event={"ID":"659dcfdd-e784-41ea-840f-a0c183639676","Type":"ContainerStarted","Data":"6b3d57a8ad7b310a1309ed02c52af6db8a96d645d1bbfdae139fc6e8ce5225d3"} Nov 24 21:45:49 crc kubenswrapper[4767]: I1124 21:45:49.294015 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" event={"ID":"659dcfdd-e784-41ea-840f-a0c183639676","Type":"ContainerStarted","Data":"07c9139d6666376f7a92ee938712adf2f91aa5b30d2a23d7c953e2a9171e1f73"} Nov 24 21:45:49 crc kubenswrapper[4767]: I1124 21:45:49.294513 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:45:49 crc kubenswrapper[4767]: I1124 21:45:49.322756 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" podStartSLOduration=2.3227281619999998 podStartE2EDuration="2.322728162s" podCreationTimestamp="2025-11-24 21:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:45:49.31636322 +0000 UTC m=+432.233346672" watchObservedRunningTime="2025-11-24 21:45:49.322728162 +0000 UTC m=+432.239711564" Nov 24 21:46:08 crc kubenswrapper[4767]: I1124 21:46:08.012078 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nbd9k" Nov 24 21:46:08 crc kubenswrapper[4767]: I1124 21:46:08.081138 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ck7c4"] Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.154856 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" podUID="04c820d8-acd5-42ce-8c38-7027eae3d43d" containerName="registry" containerID="cri-o://fdd60e36f4e6b452c4383406d2965886fe0c8870779408b49aed615c2f37447e" gracePeriod=30 Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.580100 4767 generic.go:334] "Generic (PLEG): container finished" podID="04c820d8-acd5-42ce-8c38-7027eae3d43d" containerID="fdd60e36f4e6b452c4383406d2965886fe0c8870779408b49aed615c2f37447e" exitCode=0 Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.580194 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" event={"ID":"04c820d8-acd5-42ce-8c38-7027eae3d43d","Type":"ContainerDied","Data":"fdd60e36f4e6b452c4383406d2965886fe0c8870779408b49aed615c2f37447e"} Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.580607 4767 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" event={"ID":"04c820d8-acd5-42ce-8c38-7027eae3d43d","Type":"ContainerDied","Data":"44a2644a54043ffd72531d9e2cd762d3c3e53bfcf8133864143c2967731eefeb"} Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.580638 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44a2644a54043ffd72531d9e2cd762d3c3e53bfcf8133864143c2967731eefeb" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.620999 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.711748 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-bound-sa-token\") pod \"04c820d8-acd5-42ce-8c38-7027eae3d43d\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.711797 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04c820d8-acd5-42ce-8c38-7027eae3d43d-installation-pull-secrets\") pod \"04c820d8-acd5-42ce-8c38-7027eae3d43d\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.711821 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-tls\") pod \"04c820d8-acd5-42ce-8c38-7027eae3d43d\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.711837 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-certificates\") pod \"04c820d8-acd5-42ce-8c38-7027eae3d43d\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.711934 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"04c820d8-acd5-42ce-8c38-7027eae3d43d\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.711965 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjl9j\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-kube-api-access-zjl9j\") pod \"04c820d8-acd5-42ce-8c38-7027eae3d43d\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.711989 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-trusted-ca\") pod \"04c820d8-acd5-42ce-8c38-7027eae3d43d\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.712011 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04c820d8-acd5-42ce-8c38-7027eae3d43d-ca-trust-extracted\") pod \"04c820d8-acd5-42ce-8c38-7027eae3d43d\" (UID: \"04c820d8-acd5-42ce-8c38-7027eae3d43d\") " Nov 24 
21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.712810 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "04c820d8-acd5-42ce-8c38-7027eae3d43d" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.712852 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "04c820d8-acd5-42ce-8c38-7027eae3d43d" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.721034 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "04c820d8-acd5-42ce-8c38-7027eae3d43d" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.721090 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c820d8-acd5-42ce-8c38-7027eae3d43d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "04c820d8-acd5-42ce-8c38-7027eae3d43d" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.722172 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "04c820d8-acd5-42ce-8c38-7027eae3d43d" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.724574 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-kube-api-access-zjl9j" (OuterVolumeSpecName: "kube-api-access-zjl9j") pod "04c820d8-acd5-42ce-8c38-7027eae3d43d" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d"). InnerVolumeSpecName "kube-api-access-zjl9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.724622 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "04c820d8-acd5-42ce-8c38-7027eae3d43d" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.729559 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04c820d8-acd5-42ce-8c38-7027eae3d43d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "04c820d8-acd5-42ce-8c38-7027eae3d43d" (UID: "04c820d8-acd5-42ce-8c38-7027eae3d43d"). 
InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.813400 4767 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.813731 4767 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04c820d8-acd5-42ce-8c38-7027eae3d43d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.813751 4767 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.813768 4767 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.813785 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjl9j\" (UniqueName: \"kubernetes.io/projected/04c820d8-acd5-42ce-8c38-7027eae3d43d-kube-api-access-zjl9j\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.813801 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04c820d8-acd5-42ce-8c38-7027eae3d43d-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:33 crc kubenswrapper[4767]: I1124 21:46:33.813817 4767 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04c820d8-acd5-42ce-8c38-7027eae3d43d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:34 crc kubenswrapper[4767]: I1124 21:46:34.585103 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ck7c4" Nov 24 21:46:34 crc kubenswrapper[4767]: I1124 21:46:34.600322 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ck7c4"] Nov 24 21:46:34 crc kubenswrapper[4767]: I1124 21:46:34.603687 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ck7c4"] Nov 24 21:46:36 crc kubenswrapper[4767]: I1124 21:46:36.326744 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04c820d8-acd5-42ce-8c38-7027eae3d43d" path="/var/lib/kubelet/pods/04c820d8-acd5-42ce-8c38-7027eae3d43d/volumes" Nov 24 21:47:05 crc kubenswrapper[4767]: I1124 21:47:05.488377 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:47:05 crc kubenswrapper[4767]: I1124 21:47:05.489214 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:47:35 crc kubenswrapper[4767]: I1124 21:47:35.481906 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:47:35 crc kubenswrapper[4767]: I1124 21:47:35.482583 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:47:38 crc kubenswrapper[4767]: I1124 21:47:38.484123 4767 scope.go:117] "RemoveContainer" containerID="fdd60e36f4e6b452c4383406d2965886fe0c8870779408b49aed615c2f37447e" Nov 24 21:48:05 crc kubenswrapper[4767]: I1124 21:48:05.481593 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:48:05 crc kubenswrapper[4767]: I1124 21:48:05.482499 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:48:05 crc kubenswrapper[4767]: I1124 21:48:05.482582 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:48:05 crc kubenswrapper[4767]: I1124 21:48:05.483551 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"be42d6aff78e041edb5424f488e6dd92a88fa38a755f0e75223f00653906bf6d"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:48:05 crc kubenswrapper[4767]: I1124 21:48:05.483636 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://be42d6aff78e041edb5424f488e6dd92a88fa38a755f0e75223f00653906bf6d" gracePeriod=600 Nov 24 21:48:05 crc kubenswrapper[4767]: I1124 21:48:05.882396 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="be42d6aff78e041edb5424f488e6dd92a88fa38a755f0e75223f00653906bf6d" exitCode=0 Nov 24 21:48:05 crc kubenswrapper[4767]: I1124 21:48:05.882431 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"be42d6aff78e041edb5424f488e6dd92a88fa38a755f0e75223f00653906bf6d"} Nov 24 21:48:05 crc kubenswrapper[4767]: I1124 21:48:05.882488 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"5c376cc0e5d0460b519433b94fced4d0cba810050689003c18c581dd720c940d"} Nov 24 21:48:05 crc kubenswrapper[4767]: I1124 21:48:05.882508 4767 scope.go:117] "RemoveContainer" containerID="318061ec20e01e7b9e6b9071eca399b8371f6aa151e176eee69db149828d7014" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.824328 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-pj7sv"] Nov 24 21:48:07 crc kubenswrapper[4767]: E1124 21:48:07.825130 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c820d8-acd5-42ce-8c38-7027eae3d43d" containerName="registry" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.825147 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c820d8-acd5-42ce-8c38-7027eae3d43d" containerName="registry" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.825305 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c820d8-acd5-42ce-8c38-7027eae3d43d" containerName="registry" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.825839 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-pj7sv" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.827837 4767 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-n29gb" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.831630 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.831861 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.833321 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-4snnn"] Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.834196 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-4snnn" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.836697 4767 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-pb9cn" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.847556 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-pj7sv"] Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.849807 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-4snnn"] Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.887177 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9rlpk"] Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.888847 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-9rlpk" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.891152 4767 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cjt6s" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.891534 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9rlpk"] Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.970413 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgsvk\" (UniqueName: \"kubernetes.io/projected/ec144c5d-54dc-44b7-ab5a-e79db52a31d4-kube-api-access-bgsvk\") pod \"cert-manager-cainjector-7f985d654d-pj7sv\" (UID: \"ec144c5d-54dc-44b7-ab5a-e79db52a31d4\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-pj7sv" Nov 24 21:48:07 crc kubenswrapper[4767]: I1124 21:48:07.970498 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knk9j\" (UniqueName: \"kubernetes.io/projected/8b86c481-27ba-4661-9456-6d0c2c37e707-kube-api-access-knk9j\") pod \"cert-manager-5b446d88c5-4snnn\" (UID: \"8b86c481-27ba-4661-9456-6d0c2c37e707\") " pod="cert-manager/cert-manager-5b446d88c5-4snnn" Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.076456 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhm6f\" (UniqueName: \"kubernetes.io/projected/1fef5731-47c0-449f-b861-14eb7d3bbb32-kube-api-access-hhm6f\") pod \"cert-manager-webhook-5655c58dd6-9rlpk\" (UID: \"1fef5731-47c0-449f-b861-14eb7d3bbb32\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9rlpk" Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.076546 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knk9j\" (UniqueName: \"kubernetes.io/projected/8b86c481-27ba-4661-9456-6d0c2c37e707-kube-api-access-knk9j\") pod \"cert-manager-5b446d88c5-4snnn\" (UID: \"8b86c481-27ba-4661-9456-6d0c2c37e707\") " pod="cert-manager/cert-manager-5b446d88c5-4snnn" Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.076674 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgsvk\" (UniqueName: \"kubernetes.io/projected/ec144c5d-54dc-44b7-ab5a-e79db52a31d4-kube-api-access-bgsvk\") pod \"cert-manager-cainjector-7f985d654d-pj7sv\" (UID: \"ec144c5d-54dc-44b7-ab5a-e79db52a31d4\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-pj7sv" Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.097154 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgsvk\" (UniqueName: \"kubernetes.io/projected/ec144c5d-54dc-44b7-ab5a-e79db52a31d4-kube-api-access-bgsvk\") pod \"cert-manager-cainjector-7f985d654d-pj7sv\" (UID: \"ec144c5d-54dc-44b7-ab5a-e79db52a31d4\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-pj7sv" Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.098052 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knk9j\" (UniqueName: \"kubernetes.io/projected/8b86c481-27ba-4661-9456-6d0c2c37e707-kube-api-access-knk9j\") pod \"cert-manager-5b446d88c5-4snnn\" (UID: \"8b86c481-27ba-4661-9456-6d0c2c37e707\") " pod="cert-manager/cert-manager-5b446d88c5-4snnn" Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.149726 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-pj7sv" Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.162924 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-4snnn" Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.177860 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhm6f\" (UniqueName: \"kubernetes.io/projected/1fef5731-47c0-449f-b861-14eb7d3bbb32-kube-api-access-hhm6f\") pod \"cert-manager-webhook-5655c58dd6-9rlpk\" (UID: \"1fef5731-47c0-449f-b861-14eb7d3bbb32\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9rlpk" Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.196067 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhm6f\" (UniqueName: \"kubernetes.io/projected/1fef5731-47c0-449f-b861-14eb7d3bbb32-kube-api-access-hhm6f\") pod \"cert-manager-webhook-5655c58dd6-9rlpk\" (UID: \"1fef5731-47c0-449f-b861-14eb7d3bbb32\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9rlpk" Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.220705 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-9rlpk" Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.411964 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-4snnn"] Nov 24 21:48:08 crc kubenswrapper[4767]: W1124 21:48:08.422986 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b86c481_27ba_4661_9456_6d0c2c37e707.slice/crio-ae36f128cd4dd804da79f3d6b4bc86440a8b67dfa114d3a207f1f436464fda31 WatchSource:0}: Error finding container ae36f128cd4dd804da79f3d6b4bc86440a8b67dfa114d3a207f1f436464fda31: Status 404 returned error can't find the container with id ae36f128cd4dd804da79f3d6b4bc86440a8b67dfa114d3a207f1f436464fda31 Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.425828 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.447818 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9rlpk"] Nov 24 21:48:08 crc kubenswrapper[4767]: W1124 21:48:08.451367 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fef5731_47c0_449f_b861_14eb7d3bbb32.slice/crio-7e85036ed91991490440fdd4f76eddb659190308981c317b91fde14bf08388e6 WatchSource:0}: Error finding container 7e85036ed91991490440fdd4f76eddb659190308981c317b91fde14bf08388e6: Status 404 returned error can't find the container with id 7e85036ed91991490440fdd4f76eddb659190308981c317b91fde14bf08388e6 Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.578019 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-pj7sv"] Nov 24 21:48:08 crc kubenswrapper[4767]: W1124 21:48:08.585042 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec144c5d_54dc_44b7_ab5a_e79db52a31d4.slice/crio-8955b9076a7f9833fa76057126386b352096da22468d6b36cbab11ac59407885 WatchSource:0}: Error finding container 8955b9076a7f9833fa76057126386b352096da22468d6b36cbab11ac59407885: Status 404 returned error can't find the container with id 8955b9076a7f9833fa76057126386b352096da22468d6b36cbab11ac59407885 Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.916033 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-9rlpk" event={"ID":"1fef5731-47c0-449f-b861-14eb7d3bbb32","Type":"ContainerStarted","Data":"7e85036ed91991490440fdd4f76eddb659190308981c317b91fde14bf08388e6"} Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.916789 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-pj7sv" event={"ID":"ec144c5d-54dc-44b7-ab5a-e79db52a31d4","Type":"ContainerStarted","Data":"8955b9076a7f9833fa76057126386b352096da22468d6b36cbab11ac59407885"} Nov 24 21:48:08 crc kubenswrapper[4767]: I1124 21:48:08.918112 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-4snnn" event={"ID":"8b86c481-27ba-4661-9456-6d0c2c37e707","Type":"ContainerStarted","Data":"ae36f128cd4dd804da79f3d6b4bc86440a8b67dfa114d3a207f1f436464fda31"} Nov 24 21:48:11 crc kubenswrapper[4767]: I1124 21:48:11.941314 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-9rlpk" 
event={"ID":"1fef5731-47c0-449f-b861-14eb7d3bbb32","Type":"ContainerStarted","Data":"13f54857da703743f8e6bc8c32d8105b306e9d205fcf2e1902231698923e6ad7"} Nov 24 21:48:11 crc kubenswrapper[4767]: I1124 21:48:11.941735 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-9rlpk" Nov 24 21:48:11 crc kubenswrapper[4767]: I1124 21:48:11.956697 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-9rlpk" podStartSLOduration=2.493348874 podStartE2EDuration="4.956680687s" podCreationTimestamp="2025-11-24 21:48:07 +0000 UTC" firstStartedPulling="2025-11-24 21:48:08.453345483 +0000 UTC m=+571.370328865" lastFinishedPulling="2025-11-24 21:48:10.916677306 +0000 UTC m=+573.833660678" observedRunningTime="2025-11-24 21:48:11.955245905 +0000 UTC m=+574.872229277" watchObservedRunningTime="2025-11-24 21:48:11.956680687 +0000 UTC m=+574.873664049" Nov 24 21:48:12 crc kubenswrapper[4767]: I1124 21:48:12.948082 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-4snnn" event={"ID":"8b86c481-27ba-4661-9456-6d0c2c37e707","Type":"ContainerStarted","Data":"bf5f41997b36fef1b682ce6677868ca76ee7913f344769aeb0fa45c5206c700b"} Nov 24 21:48:12 crc kubenswrapper[4767]: I1124 21:48:12.949839 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-pj7sv" event={"ID":"ec144c5d-54dc-44b7-ab5a-e79db52a31d4","Type":"ContainerStarted","Data":"a84c986d1d9aed245def9e2a676d39c0a13ca0560af940ff0d47343aeef388ec"} Nov 24 21:48:12 crc kubenswrapper[4767]: I1124 21:48:12.962706 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-4snnn" podStartSLOduration=2.055799842 podStartE2EDuration="5.962687142s" podCreationTimestamp="2025-11-24 21:48:07 +0000 UTC" firstStartedPulling="2025-11-24 21:48:08.425660108 +0000 UTC m=+571.342643470" lastFinishedPulling="2025-11-24 21:48:12.332547388 +0000 UTC m=+575.249530770" observedRunningTime="2025-11-24 21:48:12.960992963 +0000 UTC m=+575.877976335" watchObservedRunningTime="2025-11-24 21:48:12.962687142 +0000 UTC m=+575.879670514" Nov 24 21:48:12 crc kubenswrapper[4767]: I1124 21:48:12.978327 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-pj7sv" podStartSLOduration=2.227866127 podStartE2EDuration="5.978307536s" podCreationTimestamp="2025-11-24 21:48:07 +0000 UTC" firstStartedPulling="2025-11-24 21:48:08.58698705 +0000 UTC m=+571.503970432" lastFinishedPulling="2025-11-24 21:48:12.337428459 +0000 UTC m=+575.254411841" observedRunningTime="2025-11-24 21:48:12.975495664 +0000 UTC m=+575.892479046" watchObservedRunningTime="2025-11-24 21:48:12.978307536 +0000 UTC m=+575.895290908" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.224836 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-9rlpk" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.287763 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ll767"] Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.288378 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovn-controller" 
containerID="cri-o://3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d" gracePeriod=30 Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.288476 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="nbdb" containerID="cri-o://fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9" gracePeriod=30 Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.288530 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="kube-rbac-proxy-node" containerID="cri-o://5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2" gracePeriod=30 Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.288543 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194" gracePeriod=30 Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.288596 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovn-acl-logging" containerID="cri-o://4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291" gracePeriod=30 Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.288606 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="northd" containerID="cri-o://4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6" gracePeriod=30 Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.288819 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="sbdb" containerID="cri-o://6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844" gracePeriod=30 Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.354032 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" containerID="cri-o://e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380" gracePeriod=30 Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.628369 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/3.log" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.630820 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovn-acl-logging/0.log" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.634225 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovn-controller/0.log" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.634705 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.687748 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tr5nq"] Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.687975 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="kubecfg-setup" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.687989 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="kubecfg-setup" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688002 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="sbdb" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688011 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="sbdb" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688019 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688025 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688031 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688036 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688048 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="nbdb" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688055 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="nbdb" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688065 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovn-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688073 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovn-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688082 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="kube-rbac-proxy-node" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688089 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="kube-rbac-proxy-node" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688101 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688108 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688116 4767 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovn-acl-logging" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688121 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovn-acl-logging" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688133 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688139 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688147 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="northd" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688152 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="northd" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688246 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688253 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="northd" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688259 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688285 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovn-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688293 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688300 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688307 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688316 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="sbdb" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688324 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688332 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="kube-rbac-proxy-node" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688338 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="nbdb" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688346 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovn-acl-logging" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688432 
4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688439 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.688446 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.688452 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" containerName="ovnkube-controller" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.690039 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732391 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-ovn\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732440 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-systemd\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732467 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-netns\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732504 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41f27727-62e4-4386-a459-b26e471e1c0a-ovn-node-metrics-cert\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732530 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-ovn-kubernetes\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732542 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732561 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-config\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732584 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-bin\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732588 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732587 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732607 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-script-lib\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732624 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732642 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-var-lib-openvswitch\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732667 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-openvswitch\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732692 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-kubelet\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732708 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-slash\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732732 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-node-log\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732757 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-log-socket\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732791 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-etc-openvswitch\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732819 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-env-overrides\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732846 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732869 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-systemd-units\") pod 
\"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732895 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-netd\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.732930 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff52b\" (UniqueName: \"kubernetes.io/projected/41f27727-62e4-4386-a459-b26e471e1c0a-kube-api-access-ff52b\") pod \"41f27727-62e4-4386-a459-b26e471e1c0a\" (UID: \"41f27727-62e4-4386-a459-b26e471e1c0a\") " Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733040 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733058 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733066 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8e8a610-20bf-4a67-99f3-b6940f2b4242-ovnkube-script-lib\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733093 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tslkh\" (UniqueName: \"kubernetes.io/projected/b8e8a610-20bf-4a67-99f3-b6940f2b4242-kube-api-access-tslkh\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733121 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-node-log\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733146 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-run-systemd\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733073 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-log-socket" 
(OuterVolumeSpecName: "log-socket") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733091 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733096 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733108 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733114 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733124 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733139 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-slash" (OuterVolumeSpecName: "host-slash") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733152 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-node-log" (OuterVolumeSpecName: "node-log") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733164 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-run-openvswitch\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733242 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-cni-bin\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733326 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-cni-netd\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733357 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-systemd-units\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733364 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733386 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733381 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-slash\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733413 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733449 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-run-ovn\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733487 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733587 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8e8a610-20bf-4a67-99f3-b6940f2b4242-ovn-node-metrics-cert\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733625 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8e8a610-20bf-4a67-99f3-b6940f2b4242-ovnkube-config\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733661 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-var-lib-openvswitch\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733684 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-log-socket\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733706 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-run-netns\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733735 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8e8a610-20bf-4a67-99f3-b6940f2b4242-env-overrides\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733763 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-run-ovn-kubernetes\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733814 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-kubelet\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733840 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-etc-openvswitch\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733900 4767 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733916 4767 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733930 4767 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733943 4767 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733956 4767 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733986 4767 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.733999 4767 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.734022 4767 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.734040 4767 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.734051 4767 
reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.734062 4767 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41f27727-62e4-4386-a459-b26e471e1c0a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.734074 4767 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.734084 4767 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.734096 4767 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.734105 4767 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.734116 4767 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.734126 4767 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.737776 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f27727-62e4-4386-a459-b26e471e1c0a-kube-api-access-ff52b" (OuterVolumeSpecName: "kube-api-access-ff52b") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "kube-api-access-ff52b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.738036 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f27727-62e4-4386-a459-b26e471e1c0a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.753924 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "41f27727-62e4-4386-a459-b26e471e1c0a" (UID: "41f27727-62e4-4386-a459-b26e471e1c0a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.834977 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8e8a610-20bf-4a67-99f3-b6940f2b4242-ovnkube-config\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835032 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-var-lib-openvswitch\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835058 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-log-socket\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835080 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-run-netns\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835112 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8e8a610-20bf-4a67-99f3-b6940f2b4242-env-overrides\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835143 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-run-ovn-kubernetes\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835154 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-var-lib-openvswitch\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835186 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-kubelet\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835193 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-log-socket\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: 
I1124 21:48:18.835217 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-run-ovn-kubernetes\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835220 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-etc-openvswitch\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835205 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-run-netns\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835248 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-kubelet\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835328 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-etc-openvswitch\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835436 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8e8a610-20bf-4a67-99f3-b6940f2b4242-ovnkube-script-lib\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835468 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tslkh\" (UniqueName: \"kubernetes.io/projected/b8e8a610-20bf-4a67-99f3-b6940f2b4242-kube-api-access-tslkh\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835497 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-node-log\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835527 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-run-systemd\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835549 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-run-openvswitch\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835620 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-run-systemd\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.835677 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-run-openvswitch\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836057 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-cni-bin\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836104 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-cni-netd\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836137 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-systemd-units\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836171 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-slash\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836204 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-cni-bin\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836262 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-systemd-units\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836228 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-run-ovn\") pod \"ovnkube-node-tr5nq\" (UID: 
\"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836326 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-run-ovn\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836369 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8e8a610-20bf-4a67-99f3-b6940f2b4242-ovnkube-config\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836367 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-node-log\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836386 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836387 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-slash\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836476 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836574 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8e8a610-20bf-4a67-99f3-b6940f2b4242-ovn-node-metrics-cert\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836735 4767 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41f27727-62e4-4386-a459-b26e471e1c0a-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836776 4767 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41f27727-62e4-4386-a459-b26e471e1c0a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836801 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff52b\" (UniqueName: 
\"kubernetes.io/projected/41f27727-62e4-4386-a459-b26e471e1c0a-kube-api-access-ff52b\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.836982 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8e8a610-20bf-4a67-99f3-b6940f2b4242-ovnkube-script-lib\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.840144 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8e8a610-20bf-4a67-99f3-b6940f2b4242-ovn-node-metrics-cert\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.841636 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8e8a610-20bf-4a67-99f3-b6940f2b4242-env-overrides\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.844872 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8e8a610-20bf-4a67-99f3-b6940f2b4242-host-cni-netd\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.860812 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tslkh\" (UniqueName: \"kubernetes.io/projected/b8e8a610-20bf-4a67-99f3-b6940f2b4242-kube-api-access-tslkh\") pod \"ovnkube-node-tr5nq\" (UID: \"b8e8a610-20bf-4a67-99f3-b6940f2b4242\") " pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.997530 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnz8t_f45850ec-6094-4a27-aa04-a35c002e6160/kube-multus/2.log" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.998061 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnz8t_f45850ec-6094-4a27-aa04-a35c002e6160/kube-multus/1.log" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.998127 4767 generic.go:334] "Generic (PLEG): container finished" podID="f45850ec-6094-4a27-aa04-a35c002e6160" containerID="c11a97772c03bf0d654128f5785bea0e4460acc7aefb2bed6c6a691b0be41a53" exitCode=2 Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.998222 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnz8t" event={"ID":"f45850ec-6094-4a27-aa04-a35c002e6160","Type":"ContainerDied","Data":"c11a97772c03bf0d654128f5785bea0e4460acc7aefb2bed6c6a691b0be41a53"} Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.998292 4767 scope.go:117] "RemoveContainer" containerID="702fb7fd705d31c22ceba5e85eb1e1415b9a1c379ee5366b70e09dfe83653de2" Nov 24 21:48:18 crc kubenswrapper[4767]: I1124 21:48:18.998738 4767 scope.go:117] "RemoveContainer" containerID="c11a97772c03bf0d654128f5785bea0e4460acc7aefb2bed6c6a691b0be41a53" Nov 24 21:48:18 crc kubenswrapper[4767]: E1124 21:48:18.998932 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gnz8t_openshift-multus(f45850ec-6094-4a27-aa04-a35c002e6160)\"" pod="openshift-multus/multus-gnz8t" podUID="f45850ec-6094-4a27-aa04-a35c002e6160" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.001670 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.006561 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovnkube-controller/3.log" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.009659 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovn-acl-logging/0.log" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010295 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ll767_41f27727-62e4-4386-a459-b26e471e1c0a/ovn-controller/0.log" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010679 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380" exitCode=0 Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010704 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844" exitCode=0 Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010715 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9" exitCode=0 Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010726 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6" exitCode=0 Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010734 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194" exitCode=0 Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010745 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2" exitCode=0 Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010753 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291" exitCode=143 Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010761 4767 generic.go:334] "Generic (PLEG): container finished" podID="41f27727-62e4-4386-a459-b26e471e1c0a" containerID="3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d" exitCode=143 Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010782 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010808 4767 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010821 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010835 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010847 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010858 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010869 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010880 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010887 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010894 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010900 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010907 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010913 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010919 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291"} Nov 24 21:48:19 crc kubenswrapper[4767]: 
I1124 21:48:19.010926 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010933 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010942 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010952 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010959 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010965 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010970 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010976 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010983 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010991 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.010998 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011005 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011011 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011020 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" 
event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011031 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011039 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011068 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011075 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011081 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011088 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011094 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011100 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011107 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011113 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011122 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" event={"ID":"41f27727-62e4-4386-a459-b26e471e1c0a","Type":"ContainerDied","Data":"67595d8270f2306b0a29b7b4225fafcd2d0c3a6741c5e8637559f5c5e43eed8e"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011133 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011141 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011148 4767 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011155 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011162 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011169 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011175 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011182 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011188 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011194 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b"} Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.011316 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ll767" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.052396 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ll767"] Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.059728 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ll767"] Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.067087 4767 scope.go:117] "RemoveContainer" containerID="e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.092458 4767 scope.go:117] "RemoveContainer" containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.116476 4767 scope.go:117] "RemoveContainer" containerID="6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.134541 4767 scope.go:117] "RemoveContainer" containerID="fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.152317 4767 scope.go:117] "RemoveContainer" containerID="4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.173901 4767 scope.go:117] "RemoveContainer" containerID="2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.189910 4767 scope.go:117] "RemoveContainer" containerID="5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.205803 4767 scope.go:117] "RemoveContainer" containerID="4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.223459 4767 scope.go:117] "RemoveContainer" containerID="3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.291005 4767 scope.go:117] "RemoveContainer" containerID="b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.307653 4767 scope.go:117] "RemoveContainer" containerID="e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380" Nov 24 21:48:19 crc kubenswrapper[4767]: E1124 21:48:19.308226 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380\": container with ID starting with e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380 not found: ID does not exist" containerID="e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.308330 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380"} err="failed to get container status \"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380\": rpc error: code = NotFound desc = could not find container \"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380\": container with ID starting with e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.308365 4767 scope.go:117] "RemoveContainer" 
containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" Nov 24 21:48:19 crc kubenswrapper[4767]: E1124 21:48:19.308822 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\": container with ID starting with 555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469 not found: ID does not exist" containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.308859 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469"} err="failed to get container status \"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\": rpc error: code = NotFound desc = could not find container \"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\": container with ID starting with 555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.308888 4767 scope.go:117] "RemoveContainer" containerID="6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844" Nov 24 21:48:19 crc kubenswrapper[4767]: E1124 21:48:19.309236 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\": container with ID starting with 6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844 not found: ID does not exist" containerID="6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.309291 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844"} err="failed to get container status \"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\": rpc error: code = NotFound desc = could not find container \"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\": container with ID starting with 6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.309315 4767 scope.go:117] "RemoveContainer" containerID="fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9" Nov 24 21:48:19 crc kubenswrapper[4767]: E1124 21:48:19.309666 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\": container with ID starting with fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9 not found: ID does not exist" containerID="fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.309699 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9"} err="failed to get container status \"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\": rpc error: code = NotFound desc = could not find container \"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\": container with ID starting with 
fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.309723 4767 scope.go:117] "RemoveContainer" containerID="4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6" Nov 24 21:48:19 crc kubenswrapper[4767]: E1124 21:48:19.310061 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\": container with ID starting with 4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6 not found: ID does not exist" containerID="4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.310093 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6"} err="failed to get container status \"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\": rpc error: code = NotFound desc = could not find container \"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\": container with ID starting with 4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.310110 4767 scope.go:117] "RemoveContainer" containerID="2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194" Nov 24 21:48:19 crc kubenswrapper[4767]: E1124 21:48:19.310536 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\": container with ID starting with 2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194 not found: ID does not exist" containerID="2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.310569 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194"} err="failed to get container status \"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\": rpc error: code = NotFound desc = could not find container \"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\": container with ID starting with 2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.310594 4767 scope.go:117] "RemoveContainer" containerID="5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2" Nov 24 21:48:19 crc kubenswrapper[4767]: E1124 21:48:19.310918 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\": container with ID starting with 5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2 not found: ID does not exist" containerID="5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.310947 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2"} err="failed to get container status \"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\": rpc 
error: code = NotFound desc = could not find container \"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\": container with ID starting with 5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.310964 4767 scope.go:117] "RemoveContainer" containerID="4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291" Nov 24 21:48:19 crc kubenswrapper[4767]: E1124 21:48:19.311231 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\": container with ID starting with 4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291 not found: ID does not exist" containerID="4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.311300 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291"} err="failed to get container status \"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\": rpc error: code = NotFound desc = could not find container \"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\": container with ID starting with 4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.311317 4767 scope.go:117] "RemoveContainer" containerID="3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d" Nov 24 21:48:19 crc kubenswrapper[4767]: E1124 21:48:19.311639 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\": container with ID starting with 3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d not found: ID does not exist" containerID="3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.311665 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d"} err="failed to get container status \"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\": rpc error: code = NotFound desc = could not find container \"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\": container with ID starting with 3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.311682 4767 scope.go:117] "RemoveContainer" containerID="b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b" Nov 24 21:48:19 crc kubenswrapper[4767]: E1124 21:48:19.311992 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\": container with ID starting with b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b not found: ID does not exist" containerID="b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.312025 4767 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b"} err="failed to get container status \"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\": rpc error: code = NotFound desc = could not find container \"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\": container with ID starting with b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.312049 4767 scope.go:117] "RemoveContainer" containerID="e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.312508 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380"} err="failed to get container status \"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380\": rpc error: code = NotFound desc = could not find container \"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380\": container with ID starting with e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.312557 4767 scope.go:117] "RemoveContainer" containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.312899 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469"} err="failed to get container status \"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\": rpc error: code = NotFound desc = could not find container \"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\": container with ID starting with 555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.312929 4767 scope.go:117] "RemoveContainer" containerID="6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.313289 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844"} err="failed to get container status \"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\": rpc error: code = NotFound desc = could not find container \"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\": container with ID starting with 6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.313328 4767 scope.go:117] "RemoveContainer" containerID="fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.313727 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9"} err="failed to get container status \"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\": rpc error: code = NotFound desc = could not find container \"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\": container with ID starting with fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9 not found: ID does not exist" Nov 
24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.313755 4767 scope.go:117] "RemoveContainer" containerID="4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.314140 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6"} err="failed to get container status \"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\": rpc error: code = NotFound desc = could not find container \"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\": container with ID starting with 4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.314165 4767 scope.go:117] "RemoveContainer" containerID="2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.314583 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194"} err="failed to get container status \"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\": rpc error: code = NotFound desc = could not find container \"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\": container with ID starting with 2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.314610 4767 scope.go:117] "RemoveContainer" containerID="5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.314960 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2"} err="failed to get container status \"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\": rpc error: code = NotFound desc = could not find container \"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\": container with ID starting with 5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.315010 4767 scope.go:117] "RemoveContainer" containerID="4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.315498 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291"} err="failed to get container status \"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\": rpc error: code = NotFound desc = could not find container \"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\": container with ID starting with 4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.315527 4767 scope.go:117] "RemoveContainer" containerID="3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.315823 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d"} err="failed to get container status 
\"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\": rpc error: code = NotFound desc = could not find container \"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\": container with ID starting with 3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.315853 4767 scope.go:117] "RemoveContainer" containerID="b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.316309 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b"} err="failed to get container status \"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\": rpc error: code = NotFound desc = could not find container \"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\": container with ID starting with b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.316336 4767 scope.go:117] "RemoveContainer" containerID="e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.316772 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380"} err="failed to get container status \"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380\": rpc error: code = NotFound desc = could not find container \"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380\": container with ID starting with e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.316839 4767 scope.go:117] "RemoveContainer" containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.317261 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469"} err="failed to get container status \"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\": rpc error: code = NotFound desc = could not find container \"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\": container with ID starting with 555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.317300 4767 scope.go:117] "RemoveContainer" containerID="6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.317832 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844"} err="failed to get container status \"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\": rpc error: code = NotFound desc = could not find container \"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\": container with ID starting with 6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.317862 4767 scope.go:117] "RemoveContainer" 
containerID="fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.318292 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9"} err="failed to get container status \"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\": rpc error: code = NotFound desc = could not find container \"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\": container with ID starting with fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.318320 4767 scope.go:117] "RemoveContainer" containerID="4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.318645 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6"} err="failed to get container status \"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\": rpc error: code = NotFound desc = could not find container \"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\": container with ID starting with 4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.318682 4767 scope.go:117] "RemoveContainer" containerID="2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.319090 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194"} err="failed to get container status \"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\": rpc error: code = NotFound desc = could not find container \"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\": container with ID starting with 2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.319130 4767 scope.go:117] "RemoveContainer" containerID="5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.319505 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2"} err="failed to get container status \"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\": rpc error: code = NotFound desc = could not find container \"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\": container with ID starting with 5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.319546 4767 scope.go:117] "RemoveContainer" containerID="4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.319888 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291"} err="failed to get container status \"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\": rpc error: code = NotFound desc = could not find 
container \"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\": container with ID starting with 4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.319928 4767 scope.go:117] "RemoveContainer" containerID="3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.320339 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d"} err="failed to get container status \"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\": rpc error: code = NotFound desc = could not find container \"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\": container with ID starting with 3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.320365 4767 scope.go:117] "RemoveContainer" containerID="b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.320779 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b"} err="failed to get container status \"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\": rpc error: code = NotFound desc = could not find container \"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\": container with ID starting with b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.320810 4767 scope.go:117] "RemoveContainer" containerID="e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.321254 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380"} err="failed to get container status \"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380\": rpc error: code = NotFound desc = could not find container \"e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380\": container with ID starting with e604df39725ba124e6436f9d6235a60d324395e6e09e91dd43d3fff8a92ce380 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.321321 4767 scope.go:117] "RemoveContainer" containerID="555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.321653 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469"} err="failed to get container status \"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\": rpc error: code = NotFound desc = could not find container \"555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469\": container with ID starting with 555bb888f5f8ca50705e8b7b4d4a014fed2c7597d80626f34df50cc640f61469 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.321679 4767 scope.go:117] "RemoveContainer" containerID="6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.322022 4767 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844"} err="failed to get container status \"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\": rpc error: code = NotFound desc = could not find container \"6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844\": container with ID starting with 6d4d3eb37c021643a0c8d639b6937c2e14c1e165cdc288f6e6a64423b28d6844 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.322047 4767 scope.go:117] "RemoveContainer" containerID="fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.322545 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9"} err="failed to get container status \"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\": rpc error: code = NotFound desc = could not find container \"fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9\": container with ID starting with fbec53d215aab189f670c9c9c33e6ee75a9af81ff96ae000996924bceea34cb9 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.322778 4767 scope.go:117] "RemoveContainer" containerID="4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.323170 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6"} err="failed to get container status \"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\": rpc error: code = NotFound desc = could not find container \"4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6\": container with ID starting with 4a33ca6a08b055d6d149608a7155c3067ca5d61c48a279841ec7c175a3ddb5e6 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.323208 4767 scope.go:117] "RemoveContainer" containerID="2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.323631 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194"} err="failed to get container status \"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\": rpc error: code = NotFound desc = could not find container \"2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194\": container with ID starting with 2743e380b08fe87d4bb29d63057d70f63ce368538af276f0ddba3382143d8194 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.323658 4767 scope.go:117] "RemoveContainer" containerID="5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.324058 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2"} err="failed to get container status \"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\": rpc error: code = NotFound desc = could not find container \"5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2\": container with ID starting with 
5cd4c8b387a438aa68479b723a77caf3042723f2f172b5276eb73a5e802d58b2 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.324092 4767 scope.go:117] "RemoveContainer" containerID="4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.324507 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291"} err="failed to get container status \"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\": rpc error: code = NotFound desc = could not find container \"4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291\": container with ID starting with 4f51dec232777d760308a36c1a7ede49dec636ffec36d8a0deeb26131f900291 not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.324541 4767 scope.go:117] "RemoveContainer" containerID="3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.324898 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d"} err="failed to get container status \"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\": rpc error: code = NotFound desc = could not find container \"3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d\": container with ID starting with 3438857414f1312562def427da557ecc8f107eb2176d622c3d09ae093834162d not found: ID does not exist" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.324924 4767 scope.go:117] "RemoveContainer" containerID="b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b" Nov 24 21:48:19 crc kubenswrapper[4767]: I1124 21:48:19.325318 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b"} err="failed to get container status \"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\": rpc error: code = NotFound desc = could not find container \"b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b\": container with ID starting with b6f522d19306cf00ff355d05425e79fe0c3f7fff2d174fb69321b83b18d9952b not found: ID does not exist" Nov 24 21:48:20 crc kubenswrapper[4767]: I1124 21:48:20.020315 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnz8t_f45850ec-6094-4a27-aa04-a35c002e6160/kube-multus/2.log" Nov 24 21:48:20 crc kubenswrapper[4767]: I1124 21:48:20.022839 4767 generic.go:334] "Generic (PLEG): container finished" podID="b8e8a610-20bf-4a67-99f3-b6940f2b4242" containerID="5efb7dd8238dac92a6f5fe6f9b0c7fc974a0bc7adcb604eb6f4b230d2127081d" exitCode=0 Nov 24 21:48:20 crc kubenswrapper[4767]: I1124 21:48:20.022902 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" event={"ID":"b8e8a610-20bf-4a67-99f3-b6940f2b4242","Type":"ContainerDied","Data":"5efb7dd8238dac92a6f5fe6f9b0c7fc974a0bc7adcb604eb6f4b230d2127081d"} Nov 24 21:48:20 crc kubenswrapper[4767]: I1124 21:48:20.022936 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" event={"ID":"b8e8a610-20bf-4a67-99f3-b6940f2b4242","Type":"ContainerStarted","Data":"1b3745f4e5ce5171a9e7319ec64550ec54cbf2129038d44bfbd4fc00d50c4052"} Nov 24 21:48:20 crc 
kubenswrapper[4767]: I1124 21:48:20.321248 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41f27727-62e4-4386-a459-b26e471e1c0a" path="/var/lib/kubelet/pods/41f27727-62e4-4386-a459-b26e471e1c0a/volumes" Nov 24 21:48:21 crc kubenswrapper[4767]: I1124 21:48:21.034229 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" event={"ID":"b8e8a610-20bf-4a67-99f3-b6940f2b4242","Type":"ContainerStarted","Data":"95e1ccb5ac782bcc10fb017879835964c49a97ead4f1845ff49ea25f53c2e121"} Nov 24 21:48:21 crc kubenswrapper[4767]: I1124 21:48:21.034328 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" event={"ID":"b8e8a610-20bf-4a67-99f3-b6940f2b4242","Type":"ContainerStarted","Data":"ecb4961aa180084939d406022724b47551c27122e389698b1bb28d38c4dc29b0"} Nov 24 21:48:21 crc kubenswrapper[4767]: I1124 21:48:21.034366 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" event={"ID":"b8e8a610-20bf-4a67-99f3-b6940f2b4242","Type":"ContainerStarted","Data":"c2c9616f54f3dcbbfde742aade774a74632150d2851ce066c008a93b00e4d9ab"} Nov 24 21:48:21 crc kubenswrapper[4767]: I1124 21:48:21.034386 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" event={"ID":"b8e8a610-20bf-4a67-99f3-b6940f2b4242","Type":"ContainerStarted","Data":"9a497986946a51e29138a6302b6946b767fb10c099e5036b848561cf65285ac2"} Nov 24 21:48:21 crc kubenswrapper[4767]: I1124 21:48:21.034404 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" event={"ID":"b8e8a610-20bf-4a67-99f3-b6940f2b4242","Type":"ContainerStarted","Data":"e8a3ae15750858102693f4fe58e713d7e912c38c4acd14f3841f1c3379ece24d"} Nov 24 21:48:21 crc kubenswrapper[4767]: I1124 21:48:21.034423 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" event={"ID":"b8e8a610-20bf-4a67-99f3-b6940f2b4242","Type":"ContainerStarted","Data":"665ab7251de9f34b7353965369ee4764d6dac6f763b5469cbcf5eebf851d1d3e"} Nov 24 21:48:23 crc kubenswrapper[4767]: I1124 21:48:23.067447 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" event={"ID":"b8e8a610-20bf-4a67-99f3-b6940f2b4242","Type":"ContainerStarted","Data":"7f40237b9a08568adc83b4f1caa7cefc6124752311fdc8737ed2a0a4048e3268"} Nov 24 21:48:26 crc kubenswrapper[4767]: I1124 21:48:26.085078 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" event={"ID":"b8e8a610-20bf-4a67-99f3-b6940f2b4242","Type":"ContainerStarted","Data":"b8257c8927ddce78ad082b5877a5365fa0922f64b1199ad21346b951980c2599"} Nov 24 21:48:26 crc kubenswrapper[4767]: I1124 21:48:26.085649 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:26 crc kubenswrapper[4767]: I1124 21:48:26.085670 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:26 crc kubenswrapper[4767]: I1124 21:48:26.085683 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:26 crc kubenswrapper[4767]: I1124 21:48:26.123671 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:26 crc 
kubenswrapper[4767]: I1124 21:48:26.125182 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:26 crc kubenswrapper[4767]: I1124 21:48:26.126525 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" podStartSLOduration=8.126502364 podStartE2EDuration="8.126502364s" podCreationTimestamp="2025-11-24 21:48:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:48:26.118768989 +0000 UTC m=+589.035752391" watchObservedRunningTime="2025-11-24 21:48:26.126502364 +0000 UTC m=+589.043485736" Nov 24 21:48:29 crc kubenswrapper[4767]: I1124 21:48:29.313737 4767 scope.go:117] "RemoveContainer" containerID="c11a97772c03bf0d654128f5785bea0e4460acc7aefb2bed6c6a691b0be41a53" Nov 24 21:48:29 crc kubenswrapper[4767]: E1124 21:48:29.314527 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gnz8t_openshift-multus(f45850ec-6094-4a27-aa04-a35c002e6160)\"" pod="openshift-multus/multus-gnz8t" podUID="f45850ec-6094-4a27-aa04-a35c002e6160" Nov 24 21:48:44 crc kubenswrapper[4767]: I1124 21:48:44.313216 4767 scope.go:117] "RemoveContainer" containerID="c11a97772c03bf0d654128f5785bea0e4460acc7aefb2bed6c6a691b0be41a53" Nov 24 21:48:45 crc kubenswrapper[4767]: I1124 21:48:45.214884 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gnz8t_f45850ec-6094-4a27-aa04-a35c002e6160/kube-multus/2.log" Nov 24 21:48:45 crc kubenswrapper[4767]: I1124 21:48:45.215155 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnz8t" event={"ID":"f45850ec-6094-4a27-aa04-a35c002e6160","Type":"ContainerStarted","Data":"6c8ba4beb70fad83610da371b4b4be3b9f7a3f9b165bc13ea0d9688dbe6722eb"} Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.566865 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7"] Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.568035 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.570303 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.583605 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7"] Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.677198 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.677457 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.677780 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7lxq\" (UniqueName: \"kubernetes.io/projected/abb84f01-f1a5-4197-bac4-b109344281a8-kube-api-access-d7lxq\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.778909 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.779075 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7lxq\" (UniqueName: \"kubernetes.io/projected/abb84f01-f1a5-4197-bac4-b109344281a8-kube-api-access-d7lxq\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.779117 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.779764 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.779880 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.815176 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7lxq\" (UniqueName: \"kubernetes.io/projected/abb84f01-f1a5-4197-bac4-b109344281a8-kube-api-access-d7lxq\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:46 crc kubenswrapper[4767]: I1124 21:48:46.890423 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:47 crc kubenswrapper[4767]: I1124 21:48:47.139477 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7"] Nov 24 21:48:47 crc kubenswrapper[4767]: I1124 21:48:47.225550 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" event={"ID":"abb84f01-f1a5-4197-bac4-b109344281a8","Type":"ContainerStarted","Data":"02b84b4a7c12dc558e1b18a0f81dc5f612b79b1632327ce827263275954731bf"} Nov 24 21:48:48 crc kubenswrapper[4767]: I1124 21:48:48.236262 4767 generic.go:334] "Generic (PLEG): container finished" podID="abb84f01-f1a5-4197-bac4-b109344281a8" containerID="9ea7cf4c36488dca171a9aaa0b1e3d91d5a6c8adfd95dcdf8bce0a84d97b2e10" exitCode=0 Nov 24 21:48:48 crc kubenswrapper[4767]: I1124 21:48:48.236386 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" event={"ID":"abb84f01-f1a5-4197-bac4-b109344281a8","Type":"ContainerDied","Data":"9ea7cf4c36488dca171a9aaa0b1e3d91d5a6c8adfd95dcdf8bce0a84d97b2e10"} Nov 24 21:48:49 crc kubenswrapper[4767]: I1124 21:48:49.071034 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tr5nq" Nov 24 21:48:50 crc kubenswrapper[4767]: I1124 21:48:50.250391 4767 generic.go:334] "Generic (PLEG): container finished" podID="abb84f01-f1a5-4197-bac4-b109344281a8" containerID="398a99d3764a04bd89134e6ff55cf451b69fae44f3675b72653be60e26bc9e17" exitCode=0 Nov 24 21:48:50 crc kubenswrapper[4767]: I1124 21:48:50.250493 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" event={"ID":"abb84f01-f1a5-4197-bac4-b109344281a8","Type":"ContainerDied","Data":"398a99d3764a04bd89134e6ff55cf451b69fae44f3675b72653be60e26bc9e17"} Nov 24 21:48:51 crc kubenswrapper[4767]: I1124 21:48:51.261303 4767 generic.go:334] "Generic (PLEG): container finished" 
podID="abb84f01-f1a5-4197-bac4-b109344281a8" containerID="5216db033f70d68a0be1bbe770b40469bd68d949cc507da436a993e71d1b28b9" exitCode=0 Nov 24 21:48:51 crc kubenswrapper[4767]: I1124 21:48:51.261418 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" event={"ID":"abb84f01-f1a5-4197-bac4-b109344281a8","Type":"ContainerDied","Data":"5216db033f70d68a0be1bbe770b40469bd68d949cc507da436a993e71d1b28b9"} Nov 24 21:48:52 crc kubenswrapper[4767]: I1124 21:48:52.580641 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:48:52 crc kubenswrapper[4767]: I1124 21:48:52.664621 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-util\") pod \"abb84f01-f1a5-4197-bac4-b109344281a8\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " Nov 24 21:48:52 crc kubenswrapper[4767]: I1124 21:48:52.664869 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-bundle\") pod \"abb84f01-f1a5-4197-bac4-b109344281a8\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " Nov 24 21:48:52 crc kubenswrapper[4767]: I1124 21:48:52.665396 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7lxq\" (UniqueName: \"kubernetes.io/projected/abb84f01-f1a5-4197-bac4-b109344281a8-kube-api-access-d7lxq\") pod \"abb84f01-f1a5-4197-bac4-b109344281a8\" (UID: \"abb84f01-f1a5-4197-bac4-b109344281a8\") " Nov 24 21:48:52 crc kubenswrapper[4767]: I1124 21:48:52.668506 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-bundle" (OuterVolumeSpecName: "bundle") pod "abb84f01-f1a5-4197-bac4-b109344281a8" (UID: "abb84f01-f1a5-4197-bac4-b109344281a8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:48:52 crc kubenswrapper[4767]: I1124 21:48:52.675648 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb84f01-f1a5-4197-bac4-b109344281a8-kube-api-access-d7lxq" (OuterVolumeSpecName: "kube-api-access-d7lxq") pod "abb84f01-f1a5-4197-bac4-b109344281a8" (UID: "abb84f01-f1a5-4197-bac4-b109344281a8"). InnerVolumeSpecName "kube-api-access-d7lxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:48:52 crc kubenswrapper[4767]: I1124 21:48:52.689607 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-util" (OuterVolumeSpecName: "util") pod "abb84f01-f1a5-4197-bac4-b109344281a8" (UID: "abb84f01-f1a5-4197-bac4-b109344281a8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:48:52 crc kubenswrapper[4767]: I1124 21:48:52.767519 4767 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:52 crc kubenswrapper[4767]: I1124 21:48:52.767582 4767 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/abb84f01-f1a5-4197-bac4-b109344281a8-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:52 crc kubenswrapper[4767]: I1124 21:48:52.767598 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7lxq\" (UniqueName: \"kubernetes.io/projected/abb84f01-f1a5-4197-bac4-b109344281a8-kube-api-access-d7lxq\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:53 crc kubenswrapper[4767]: I1124 21:48:53.286866 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" event={"ID":"abb84f01-f1a5-4197-bac4-b109344281a8","Type":"ContainerDied","Data":"02b84b4a7c12dc558e1b18a0f81dc5f612b79b1632327ce827263275954731bf"} Nov 24 21:48:53 crc kubenswrapper[4767]: I1124 21:48:53.286926 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02b84b4a7c12dc558e1b18a0f81dc5f612b79b1632327ce827263275954731bf" Nov 24 21:48:53 crc kubenswrapper[4767]: I1124 21:48:53.287029 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.147818 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-g49md"] Nov 24 21:49:04 crc kubenswrapper[4767]: E1124 21:49:04.148620 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb84f01-f1a5-4197-bac4-b109344281a8" containerName="util" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.148640 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb84f01-f1a5-4197-bac4-b109344281a8" containerName="util" Nov 24 21:49:04 crc kubenswrapper[4767]: E1124 21:49:04.148664 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb84f01-f1a5-4197-bac4-b109344281a8" containerName="extract" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.148672 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb84f01-f1a5-4197-bac4-b109344281a8" containerName="extract" Nov 24 21:49:04 crc kubenswrapper[4767]: E1124 21:49:04.148687 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb84f01-f1a5-4197-bac4-b109344281a8" containerName="pull" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.148694 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb84f01-f1a5-4197-bac4-b109344281a8" containerName="pull" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.148827 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb84f01-f1a5-4197-bac4-b109344281a8" containerName="extract" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.149327 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-g49md" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.154425 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.154540 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.154589 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-n5nq6" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.167559 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-g49md"] Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.219445 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4btts\" (UniqueName: \"kubernetes.io/projected/877151d2-38aa-421e-9335-dc8ef0f8dfc6-kube-api-access-4btts\") pod \"obo-prometheus-operator-668cf9dfbb-g49md\" (UID: \"877151d2-38aa-421e-9335-dc8ef0f8dfc6\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-g49md" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.271017 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9"] Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.271919 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.274049 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.275380 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-nts9p" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.282039 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t"] Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.282850 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.295370 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9"] Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.312057 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t"] Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.320726 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/292b7555-3ea7-43a0-a123-d8c03d0181f4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9\" (UID: \"292b7555-3ea7-43a0-a123-d8c03d0181f4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.320828 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4btts\" (UniqueName: \"kubernetes.io/projected/877151d2-38aa-421e-9335-dc8ef0f8dfc6-kube-api-access-4btts\") pod \"obo-prometheus-operator-668cf9dfbb-g49md\" (UID: \"877151d2-38aa-421e-9335-dc8ef0f8dfc6\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-g49md" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.320857 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/292b7555-3ea7-43a0-a123-d8c03d0181f4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9\" (UID: \"292b7555-3ea7-43a0-a123-d8c03d0181f4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.353143 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4btts\" (UniqueName: \"kubernetes.io/projected/877151d2-38aa-421e-9335-dc8ef0f8dfc6-kube-api-access-4btts\") pod \"obo-prometheus-operator-668cf9dfbb-g49md\" (UID: \"877151d2-38aa-421e-9335-dc8ef0f8dfc6\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-g49md" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.421782 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf6f3541-d121-4bbe-8b0b-969a4c0031a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t\" (UID: \"cf6f3541-d121-4bbe-8b0b-969a4c0031a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.421922 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/292b7555-3ea7-43a0-a123-d8c03d0181f4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9\" (UID: \"292b7555-3ea7-43a0-a123-d8c03d0181f4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.421982 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf6f3541-d121-4bbe-8b0b-969a4c0031a6-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t\" (UID: \"cf6f3541-d121-4bbe-8b0b-969a4c0031a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.422043 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/292b7555-3ea7-43a0-a123-d8c03d0181f4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9\" (UID: \"292b7555-3ea7-43a0-a123-d8c03d0181f4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.428160 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/292b7555-3ea7-43a0-a123-d8c03d0181f4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9\" (UID: \"292b7555-3ea7-43a0-a123-d8c03d0181f4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.428575 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/292b7555-3ea7-43a0-a123-d8c03d0181f4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9\" (UID: \"292b7555-3ea7-43a0-a123-d8c03d0181f4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.469292 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-g49md" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.481628 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-8759g"] Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.482454 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-8759g" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.485109 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-cz2q9" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.485749 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.524941 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf6f3541-d121-4bbe-8b0b-969a4c0031a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t\" (UID: \"cf6f3541-d121-4bbe-8b0b-969a4c0031a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.525021 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf6f3541-d121-4bbe-8b0b-969a4c0031a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t\" (UID: \"cf6f3541-d121-4bbe-8b0b-969a4c0031a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.528329 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf6f3541-d121-4bbe-8b0b-969a4c0031a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t\" (UID: \"cf6f3541-d121-4bbe-8b0b-969a4c0031a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.528793 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf6f3541-d121-4bbe-8b0b-969a4c0031a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t\" (UID: \"cf6f3541-d121-4bbe-8b0b-969a4c0031a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.545318 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-8759g"] Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.593664 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.606971 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.625850 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/04302788-c622-42ea-b5a6-eff1c0afd3ce-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-8759g\" (UID: \"04302788-c622-42ea-b5a6-eff1c0afd3ce\") " pod="openshift-operators/observability-operator-d8bb48f5d-8759g" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.626155 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27sf8\" (UniqueName: \"kubernetes.io/projected/04302788-c622-42ea-b5a6-eff1c0afd3ce-kube-api-access-27sf8\") pod \"observability-operator-d8bb48f5d-8759g\" (UID: \"04302788-c622-42ea-b5a6-eff1c0afd3ce\") " pod="openshift-operators/observability-operator-d8bb48f5d-8759g" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.687089 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-wwcbf"] Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.687760 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-wwcbf" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.689763 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-ktb8v" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.701650 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-wwcbf"] Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.727005 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/04302788-c622-42ea-b5a6-eff1c0afd3ce-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-8759g\" (UID: \"04302788-c622-42ea-b5a6-eff1c0afd3ce\") " pod="openshift-operators/observability-operator-d8bb48f5d-8759g" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.727053 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27sf8\" (UniqueName: \"kubernetes.io/projected/04302788-c622-42ea-b5a6-eff1c0afd3ce-kube-api-access-27sf8\") pod \"observability-operator-d8bb48f5d-8759g\" (UID: \"04302788-c622-42ea-b5a6-eff1c0afd3ce\") " pod="openshift-operators/observability-operator-d8bb48f5d-8759g" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.731957 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/04302788-c622-42ea-b5a6-eff1c0afd3ce-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-8759g\" (UID: \"04302788-c622-42ea-b5a6-eff1c0afd3ce\") " pod="openshift-operators/observability-operator-d8bb48f5d-8759g" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.749468 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27sf8\" (UniqueName: \"kubernetes.io/projected/04302788-c622-42ea-b5a6-eff1c0afd3ce-kube-api-access-27sf8\") pod \"observability-operator-d8bb48f5d-8759g\" (UID: \"04302788-c622-42ea-b5a6-eff1c0afd3ce\") " pod="openshift-operators/observability-operator-d8bb48f5d-8759g" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.756550 4767 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-g49md"] Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.828043 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/29641d4f-33cd-4116-a496-0767a54e5403-openshift-service-ca\") pod \"perses-operator-5446b9c989-wwcbf\" (UID: \"29641d4f-33cd-4116-a496-0767a54e5403\") " pod="openshift-operators/perses-operator-5446b9c989-wwcbf" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.828097 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx295\" (UniqueName: \"kubernetes.io/projected/29641d4f-33cd-4116-a496-0767a54e5403-kube-api-access-bx295\") pod \"perses-operator-5446b9c989-wwcbf\" (UID: \"29641d4f-33cd-4116-a496-0767a54e5403\") " pod="openshift-operators/perses-operator-5446b9c989-wwcbf" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.832926 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-8759g" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.860733 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9"] Nov 24 21:49:04 crc kubenswrapper[4767]: W1124 21:49:04.881085 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod292b7555_3ea7_43a0_a123_d8c03d0181f4.slice/crio-c5ec4716fb488e7d501014edea4f8c50c8dfd09f373458a2822ba152258df535 WatchSource:0}: Error finding container c5ec4716fb488e7d501014edea4f8c50c8dfd09f373458a2822ba152258df535: Status 404 returned error can't find the container with id c5ec4716fb488e7d501014edea4f8c50c8dfd09f373458a2822ba152258df535 Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.910351 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t"] Nov 24 21:49:04 crc kubenswrapper[4767]: W1124 21:49:04.924736 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf6f3541_d121_4bbe_8b0b_969a4c0031a6.slice/crio-b541fd7b3b543598803822e9aced1da985223d3c6bc68ad90ceaf0f7ef5c0868 WatchSource:0}: Error finding container b541fd7b3b543598803822e9aced1da985223d3c6bc68ad90ceaf0f7ef5c0868: Status 404 returned error can't find the container with id b541fd7b3b543598803822e9aced1da985223d3c6bc68ad90ceaf0f7ef5c0868 Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.928761 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/29641d4f-33cd-4116-a496-0767a54e5403-openshift-service-ca\") pod \"perses-operator-5446b9c989-wwcbf\" (UID: \"29641d4f-33cd-4116-a496-0767a54e5403\") " pod="openshift-operators/perses-operator-5446b9c989-wwcbf" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.928837 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx295\" (UniqueName: \"kubernetes.io/projected/29641d4f-33cd-4116-a496-0767a54e5403-kube-api-access-bx295\") pod \"perses-operator-5446b9c989-wwcbf\" (UID: \"29641d4f-33cd-4116-a496-0767a54e5403\") " pod="openshift-operators/perses-operator-5446b9c989-wwcbf" Nov 24 21:49:04 crc 
kubenswrapper[4767]: I1124 21:49:04.929672 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/29641d4f-33cd-4116-a496-0767a54e5403-openshift-service-ca\") pod \"perses-operator-5446b9c989-wwcbf\" (UID: \"29641d4f-33cd-4116-a496-0767a54e5403\") " pod="openshift-operators/perses-operator-5446b9c989-wwcbf" Nov 24 21:49:04 crc kubenswrapper[4767]: I1124 21:49:04.947752 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx295\" (UniqueName: \"kubernetes.io/projected/29641d4f-33cd-4116-a496-0767a54e5403-kube-api-access-bx295\") pod \"perses-operator-5446b9c989-wwcbf\" (UID: \"29641d4f-33cd-4116-a496-0767a54e5403\") " pod="openshift-operators/perses-operator-5446b9c989-wwcbf" Nov 24 21:49:05 crc kubenswrapper[4767]: I1124 21:49:05.015534 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-wwcbf" Nov 24 21:49:05 crc kubenswrapper[4767]: I1124 21:49:05.055628 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-8759g"] Nov 24 21:49:05 crc kubenswrapper[4767]: W1124 21:49:05.063055 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04302788_c622_42ea_b5a6_eff1c0afd3ce.slice/crio-5d551c6f3ba2dc30f5938cfc06cf15bb80a2e99605d4ed0c179421fc808aeaf1 WatchSource:0}: Error finding container 5d551c6f3ba2dc30f5938cfc06cf15bb80a2e99605d4ed0c179421fc808aeaf1: Status 404 returned error can't find the container with id 5d551c6f3ba2dc30f5938cfc06cf15bb80a2e99605d4ed0c179421fc808aeaf1 Nov 24 21:49:05 crc kubenswrapper[4767]: I1124 21:49:05.349578 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-g49md" event={"ID":"877151d2-38aa-421e-9335-dc8ef0f8dfc6","Type":"ContainerStarted","Data":"90517f992069949257047824fe4943a8ffd6655fd2707f1432580feabe3f6e64"} Nov 24 21:49:05 crc kubenswrapper[4767]: I1124 21:49:05.351096 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t" event={"ID":"cf6f3541-d121-4bbe-8b0b-969a4c0031a6","Type":"ContainerStarted","Data":"b541fd7b3b543598803822e9aced1da985223d3c6bc68ad90ceaf0f7ef5c0868"} Nov 24 21:49:05 crc kubenswrapper[4767]: I1124 21:49:05.352321 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" event={"ID":"292b7555-3ea7-43a0-a123-d8c03d0181f4","Type":"ContainerStarted","Data":"c5ec4716fb488e7d501014edea4f8c50c8dfd09f373458a2822ba152258df535"} Nov 24 21:49:05 crc kubenswrapper[4767]: I1124 21:49:05.353462 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-8759g" event={"ID":"04302788-c622-42ea-b5a6-eff1c0afd3ce","Type":"ContainerStarted","Data":"5d551c6f3ba2dc30f5938cfc06cf15bb80a2e99605d4ed0c179421fc808aeaf1"} Nov 24 21:49:05 crc kubenswrapper[4767]: I1124 21:49:05.530607 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-wwcbf"] Nov 24 21:49:05 crc kubenswrapper[4767]: W1124 21:49:05.538897 4767 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29641d4f_33cd_4116_a496_0767a54e5403.slice/crio-1eece422bd9df32a01ef041ea185b8528115fdb4ff73cc7c32016fb05e559aef WatchSource:0}: Error finding container 1eece422bd9df32a01ef041ea185b8528115fdb4ff73cc7c32016fb05e559aef: Status 404 returned error can't find the container with id 1eece422bd9df32a01ef041ea185b8528115fdb4ff73cc7c32016fb05e559aef Nov 24 21:49:06 crc kubenswrapper[4767]: I1124 21:49:06.359525 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-wwcbf" event={"ID":"29641d4f-33cd-4116-a496-0767a54e5403","Type":"ContainerStarted","Data":"1eece422bd9df32a01ef041ea185b8528115fdb4ff73cc7c32016fb05e559aef"} Nov 24 21:49:20 crc kubenswrapper[4767]: E1124 21:49:20.434134 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Nov 24 21:49:20 crc kubenswrapper[4767]: E1124 21:49:20.434881 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9_openshift-operators(292b7555-3ea7-43a0-a123-d8c03d0181f4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:49:20 crc kubenswrapper[4767]: E1124 21:49:20.436827 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" podUID="292b7555-3ea7-43a0-a123-d8c03d0181f4" Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.516021 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-8759g" event={"ID":"04302788-c622-42ea-b5a6-eff1c0afd3ce","Type":"ContainerStarted","Data":"891b323cc7179cbfcb89b63949cc69b5f7a75c2f2337b72fc418adf24a37e5eb"} Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.516463 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-8759g" Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.518664 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-g49md" event={"ID":"877151d2-38aa-421e-9335-dc8ef0f8dfc6","Type":"ContainerStarted","Data":"0cbd7a0ce5b4222d81c1c82ab0debd68294356fe55700e18d8e2e1a1f8d63002"} Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.520436 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t" event={"ID":"cf6f3541-d121-4bbe-8b0b-969a4c0031a6","Type":"ContainerStarted","Data":"87873813f89ec413391f9f5a021416a714ed8408b482b3728b6b994f4a063ef8"} Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.522548 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-wwcbf" event={"ID":"29641d4f-33cd-4116-a496-0767a54e5403","Type":"ContainerStarted","Data":"a2a25868c640c66b1c75fe8ed99baaccbdc36ad93af2fad91739a08ffef7cc9d"} Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.522702 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-wwcbf" Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.524639 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" event={"ID":"292b7555-3ea7-43a0-a123-d8c03d0181f4","Type":"ContainerStarted","Data":"400205c501a01e6bb8878105705a9777ef8a3a55c022111c0cd97d000d62845c"} Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.548660 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-8759g" podStartSLOduration=2.042008743 podStartE2EDuration="17.548635702s" podCreationTimestamp="2025-11-24 21:49:04 +0000 UTC" firstStartedPulling="2025-11-24 21:49:05.070829832 +0000 UTC m=+627.987813204" lastFinishedPulling="2025-11-24 21:49:20.577456791 +0000 UTC m=+643.494440163" observedRunningTime="2025-11-24 21:49:21.542404111 +0000 UTC m=+644.459387523" watchObservedRunningTime="2025-11-24 21:49:21.548635702 +0000 UTC m=+644.465619074" Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.568865 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t" podStartSLOduration=1.993987247 podStartE2EDuration="17.56884691s" podCreationTimestamp="2025-11-24 21:49:04 +0000 UTC" firstStartedPulling="2025-11-24 21:49:04.929363818 +0000 UTC m=+627.846347190" lastFinishedPulling="2025-11-24 21:49:20.504223481 +0000 UTC 
m=+643.421206853" observedRunningTime="2025-11-24 21:49:21.566188523 +0000 UTC m=+644.483171895" watchObservedRunningTime="2025-11-24 21:49:21.56884691 +0000 UTC m=+644.485830282" Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.591041 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-g49md" podStartSLOduration=1.90332174 podStartE2EDuration="17.591015075s" podCreationTimestamp="2025-11-24 21:49:04 +0000 UTC" firstStartedPulling="2025-11-24 21:49:04.789850561 +0000 UTC m=+627.706833933" lastFinishedPulling="2025-11-24 21:49:20.477543896 +0000 UTC m=+643.394527268" observedRunningTime="2025-11-24 21:49:21.588657166 +0000 UTC m=+644.505640548" watchObservedRunningTime="2025-11-24 21:49:21.591015075 +0000 UTC m=+644.507998477" Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.596505 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-8759g" Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.639498 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9" podStartSLOduration=-9223372019.2153 podStartE2EDuration="17.639476794s" podCreationTimestamp="2025-11-24 21:49:04 +0000 UTC" firstStartedPulling="2025-11-24 21:49:04.886693477 +0000 UTC m=+627.803676849" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:49:21.636439805 +0000 UTC m=+644.553423177" watchObservedRunningTime="2025-11-24 21:49:21.639476794 +0000 UTC m=+644.556460166" Nov 24 21:49:21 crc kubenswrapper[4767]: I1124 21:49:21.639634 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-wwcbf" podStartSLOduration=2.6762234449999998 podStartE2EDuration="17.639624298s" podCreationTimestamp="2025-11-24 21:49:04 +0000 UTC" firstStartedPulling="2025-11-24 21:49:05.541517358 +0000 UTC m=+628.458500720" lastFinishedPulling="2025-11-24 21:49:20.504918201 +0000 UTC m=+643.421901573" observedRunningTime="2025-11-24 21:49:21.616433884 +0000 UTC m=+644.533417266" watchObservedRunningTime="2025-11-24 21:49:21.639624298 +0000 UTC m=+644.556607670" Nov 24 21:49:25 crc kubenswrapper[4767]: I1124 21:49:25.018118 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-wwcbf" Nov 24 21:49:43 crc kubenswrapper[4767]: I1124 21:49:43.960869 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6"] Nov 24 21:49:43 crc kubenswrapper[4767]: I1124 21:49:43.962261 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:43 crc kubenswrapper[4767]: I1124 21:49:43.963887 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:49:43 crc kubenswrapper[4767]: I1124 21:49:43.973098 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6"] Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.020443 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h68dr\" (UniqueName: \"kubernetes.io/projected/e4522745-8479-4cf2-8703-03433a9be00e-kube-api-access-h68dr\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.020504 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.020542 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.121340 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.121429 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h68dr\" (UniqueName: \"kubernetes.io/projected/e4522745-8479-4cf2-8703-03433a9be00e-kube-api-access-h68dr\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.121464 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.121831 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.121861 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.140748 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h68dr\" (UniqueName: \"kubernetes.io/projected/e4522745-8479-4cf2-8703-03433a9be00e-kube-api-access-h68dr\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.278544 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.516234 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6"] Nov 24 21:49:44 crc kubenswrapper[4767]: W1124 21:49:44.527572 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4522745_8479_4cf2_8703_03433a9be00e.slice/crio-736740a2190e17c0d485b81dca02bc0463c9ad9a35101a01cfac291679211c18 WatchSource:0}: Error finding container 736740a2190e17c0d485b81dca02bc0463c9ad9a35101a01cfac291679211c18: Status 404 returned error can't find the container with id 736740a2190e17c0d485b81dca02bc0463c9ad9a35101a01cfac291679211c18 Nov 24 21:49:44 crc kubenswrapper[4767]: I1124 21:49:44.649014 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" event={"ID":"e4522745-8479-4cf2-8703-03433a9be00e","Type":"ContainerStarted","Data":"736740a2190e17c0d485b81dca02bc0463c9ad9a35101a01cfac291679211c18"} Nov 24 21:49:45 crc kubenswrapper[4767]: I1124 21:49:45.659853 4767 generic.go:334] "Generic (PLEG): container finished" podID="e4522745-8479-4cf2-8703-03433a9be00e" containerID="e7b779e9ce3b9a3cbc272e1bbdc05e597f6e2e2b9c386f4f883c0dbb846d0197" exitCode=0 Nov 24 21:49:45 crc kubenswrapper[4767]: I1124 21:49:45.660147 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" event={"ID":"e4522745-8479-4cf2-8703-03433a9be00e","Type":"ContainerDied","Data":"e7b779e9ce3b9a3cbc272e1bbdc05e597f6e2e2b9c386f4f883c0dbb846d0197"} Nov 24 21:49:48 crc kubenswrapper[4767]: I1124 21:49:48.685466 4767 generic.go:334] "Generic (PLEG): container finished" podID="e4522745-8479-4cf2-8703-03433a9be00e" containerID="09742a62265295a6204c848862866c66b767db4d21f909193117ef9d22d0155a" exitCode=0 Nov 24 21:49:48 crc kubenswrapper[4767]: I1124 21:49:48.685899 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" event={"ID":"e4522745-8479-4cf2-8703-03433a9be00e","Type":"ContainerDied","Data":"09742a62265295a6204c848862866c66b767db4d21f909193117ef9d22d0155a"} Nov 24 21:49:49 crc kubenswrapper[4767]: I1124 21:49:49.695815 4767 generic.go:334] "Generic (PLEG): container finished" podID="e4522745-8479-4cf2-8703-03433a9be00e" containerID="93f940d833d6650786912f1ef2a119a461cf5e34a16f498e769d27ef2821ec7d" exitCode=0 Nov 24 21:49:49 crc kubenswrapper[4767]: I1124 21:49:49.695880 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" event={"ID":"e4522745-8479-4cf2-8703-03433a9be00e","Type":"ContainerDied","Data":"93f940d833d6650786912f1ef2a119a461cf5e34a16f498e769d27ef2821ec7d"} Nov 24 21:49:50 crc kubenswrapper[4767]: I1124 21:49:50.899438 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.015527 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-bundle\") pod \"e4522745-8479-4cf2-8703-03433a9be00e\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.015646 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-util\") pod \"e4522745-8479-4cf2-8703-03433a9be00e\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.015768 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h68dr\" (UniqueName: \"kubernetes.io/projected/e4522745-8479-4cf2-8703-03433a9be00e-kube-api-access-h68dr\") pod \"e4522745-8479-4cf2-8703-03433a9be00e\" (UID: \"e4522745-8479-4cf2-8703-03433a9be00e\") " Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.017121 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-bundle" (OuterVolumeSpecName: "bundle") pod "e4522745-8479-4cf2-8703-03433a9be00e" (UID: "e4522745-8479-4cf2-8703-03433a9be00e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.022462 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4522745-8479-4cf2-8703-03433a9be00e-kube-api-access-h68dr" (OuterVolumeSpecName: "kube-api-access-h68dr") pod "e4522745-8479-4cf2-8703-03433a9be00e" (UID: "e4522745-8479-4cf2-8703-03433a9be00e"). InnerVolumeSpecName "kube-api-access-h68dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.037867 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-util" (OuterVolumeSpecName: "util") pod "e4522745-8479-4cf2-8703-03433a9be00e" (UID: "e4522745-8479-4cf2-8703-03433a9be00e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.117499 4767 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.117932 4767 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4522745-8479-4cf2-8703-03433a9be00e-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.118190 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h68dr\" (UniqueName: \"kubernetes.io/projected/e4522745-8479-4cf2-8703-03433a9be00e-kube-api-access-h68dr\") on node \"crc\" DevicePath \"\"" Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.709498 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" event={"ID":"e4522745-8479-4cf2-8703-03433a9be00e","Type":"ContainerDied","Data":"736740a2190e17c0d485b81dca02bc0463c9ad9a35101a01cfac291679211c18"} Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.709545 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="736740a2190e17c0d485b81dca02bc0463c9ad9a35101a01cfac291679211c18" Nov 24 21:49:51 crc kubenswrapper[4767]: I1124 21:49:51.709554 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.114745 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-gsqpw"] Nov 24 21:49:54 crc kubenswrapper[4767]: E1124 21:49:54.115203 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4522745-8479-4cf2-8703-03433a9be00e" containerName="util" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.115237 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4522745-8479-4cf2-8703-03433a9be00e" containerName="util" Nov 24 21:49:54 crc kubenswrapper[4767]: E1124 21:49:54.115260 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4522745-8479-4cf2-8703-03433a9be00e" containerName="extract" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.115315 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4522745-8479-4cf2-8703-03433a9be00e" containerName="extract" Nov 24 21:49:54 crc kubenswrapper[4767]: E1124 21:49:54.115357 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4522745-8479-4cf2-8703-03433a9be00e" containerName="pull" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.115378 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4522745-8479-4cf2-8703-03433a9be00e" containerName="pull" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.115619 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4522745-8479-4cf2-8703-03433a9be00e" containerName="extract" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.117131 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-gsqpw" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.121225 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.121476 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.121693 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ndxpx" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.125052 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-gsqpw"] Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.257917 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk4hs\" (UniqueName: \"kubernetes.io/projected/7c8fb1d2-5046-4cd5-a080-9f1d2fbf95dc-kube-api-access-xk4hs\") pod \"nmstate-operator-557fdffb88-gsqpw\" (UID: \"7c8fb1d2-5046-4cd5-a080-9f1d2fbf95dc\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-gsqpw" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.359875 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk4hs\" (UniqueName: \"kubernetes.io/projected/7c8fb1d2-5046-4cd5-a080-9f1d2fbf95dc-kube-api-access-xk4hs\") pod \"nmstate-operator-557fdffb88-gsqpw\" (UID: \"7c8fb1d2-5046-4cd5-a080-9f1d2fbf95dc\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-gsqpw" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.381396 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk4hs\" (UniqueName: \"kubernetes.io/projected/7c8fb1d2-5046-4cd5-a080-9f1d2fbf95dc-kube-api-access-xk4hs\") pod \"nmstate-operator-557fdffb88-gsqpw\" (UID: \"7c8fb1d2-5046-4cd5-a080-9f1d2fbf95dc\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-gsqpw" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.440595 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-gsqpw" Nov 24 21:49:54 crc kubenswrapper[4767]: I1124 21:49:54.895697 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-gsqpw"] Nov 24 21:49:55 crc kubenswrapper[4767]: I1124 21:49:55.742095 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-gsqpw" event={"ID":"7c8fb1d2-5046-4cd5-a080-9f1d2fbf95dc","Type":"ContainerStarted","Data":"542fa8a86cf6c9c9b816c5354dd75f56b08d13dc66673ee8f12e75f481c9068b"} Nov 24 21:49:57 crc kubenswrapper[4767]: I1124 21:49:57.757458 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-gsqpw" event={"ID":"7c8fb1d2-5046-4cd5-a080-9f1d2fbf95dc","Type":"ContainerStarted","Data":"7f6656c2b68ccb64f5baf39ba3fcc80a4087b4b3474d8089c1ae76ae66e6075e"} Nov 24 21:49:57 crc kubenswrapper[4767]: I1124 21:49:57.784923 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-gsqpw" podStartSLOduration=1.255212342 podStartE2EDuration="3.784900823s" podCreationTimestamp="2025-11-24 21:49:54 +0000 UTC" firstStartedPulling="2025-11-24 21:49:54.912909608 +0000 UTC m=+677.829892980" lastFinishedPulling="2025-11-24 21:49:57.442598089 +0000 UTC m=+680.359581461" observedRunningTime="2025-11-24 21:49:57.780450224 +0000 UTC m=+680.697433626" watchObservedRunningTime="2025-11-24 21:49:57.784900823 +0000 UTC m=+680.701884195" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.043587 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f"] Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.045162 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.048678 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-dhggq" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.053453 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc"] Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.054613 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.056432 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.057722 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f"] Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.075310 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc"] Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.080661 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tn5df"] Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.081647 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.103092 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pglq7\" (UniqueName: \"kubernetes.io/projected/03c475f5-6bf2-4d5b-9d4a-9f7e9a796a14-kube-api-access-pglq7\") pod \"nmstate-metrics-5dcf9c57c5-vc96f\" (UID: \"03c475f5-6bf2-4d5b-9d4a-9f7e9a796a14\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.204561 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ec2443e8-31e2-462e-8228-20b836a0293b-nmstate-lock\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.204627 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pglq7\" (UniqueName: \"kubernetes.io/projected/03c475f5-6bf2-4d5b-9d4a-9f7e9a796a14-kube-api-access-pglq7\") pod \"nmstate-metrics-5dcf9c57c5-vc96f\" (UID: \"03c475f5-6bf2-4d5b-9d4a-9f7e9a796a14\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.204656 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/949ff973-0dba-43c3-9797-a11b5df07b78-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-vw4tc\" (UID: \"949ff973-0dba-43c3-9797-a11b5df07b78\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.204691 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ec2443e8-31e2-462e-8228-20b836a0293b-ovs-socket\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.204744 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7gn\" (UniqueName: \"kubernetes.io/projected/ec2443e8-31e2-462e-8228-20b836a0293b-kube-api-access-gc7gn\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.204799 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ec2443e8-31e2-462e-8228-20b836a0293b-dbus-socket\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.204831 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xffsk\" (UniqueName: \"kubernetes.io/projected/949ff973-0dba-43c3-9797-a11b5df07b78-kube-api-access-xffsk\") pod \"nmstate-webhook-6b89b748d8-vw4tc\" (UID: \"949ff973-0dba-43c3-9797-a11b5df07b78\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.217422 4767 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6"] Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.218101 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.221012 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.221748 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-m6h52" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.227697 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.227819 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6"] Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.244870 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pglq7\" (UniqueName: \"kubernetes.io/projected/03c475f5-6bf2-4d5b-9d4a-9f7e9a796a14-kube-api-access-pglq7\") pod \"nmstate-metrics-5dcf9c57c5-vc96f\" (UID: \"03c475f5-6bf2-4d5b-9d4a-9f7e9a796a14\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306101 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc7gn\" (UniqueName: \"kubernetes.io/projected/ec2443e8-31e2-462e-8228-20b836a0293b-kube-api-access-gc7gn\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306170 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ec2443e8-31e2-462e-8228-20b836a0293b-dbus-socket\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306211 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqhk\" (UniqueName: \"kubernetes.io/projected/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-kube-api-access-nbqhk\") pod \"nmstate-console-plugin-5874bd7bc5-4j6q6\" (UID: \"1b33b046-047a-4fb3-a8f7-5878cb5b67a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306247 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xffsk\" (UniqueName: \"kubernetes.io/projected/949ff973-0dba-43c3-9797-a11b5df07b78-kube-api-access-xffsk\") pod \"nmstate-webhook-6b89b748d8-vw4tc\" (UID: \"949ff973-0dba-43c3-9797-a11b5df07b78\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306337 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ec2443e8-31e2-462e-8228-20b836a0293b-nmstate-lock\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306369 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4j6q6\" (UID: \"1b33b046-047a-4fb3-a8f7-5878cb5b67a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306392 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/949ff973-0dba-43c3-9797-a11b5df07b78-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-vw4tc\" (UID: \"949ff973-0dba-43c3-9797-a11b5df07b78\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306419 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4j6q6\" (UID: \"1b33b046-047a-4fb3-a8f7-5878cb5b67a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306433 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ec2443e8-31e2-462e-8228-20b836a0293b-nmstate-lock\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306476 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ec2443e8-31e2-462e-8228-20b836a0293b-ovs-socket\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306447 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ec2443e8-31e2-462e-8228-20b836a0293b-ovs-socket\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.306492 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ec2443e8-31e2-462e-8228-20b836a0293b-dbus-socket\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: E1124 21:50:05.306565 4767 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 24 21:50:05 crc kubenswrapper[4767]: E1124 21:50:05.306695 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/949ff973-0dba-43c3-9797-a11b5df07b78-tls-key-pair podName:949ff973-0dba-43c3-9797-a11b5df07b78 nodeName:}" failed. No retries permitted until 2025-11-24 21:50:05.806671949 +0000 UTC m=+688.723655321 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/949ff973-0dba-43c3-9797-a11b5df07b78-tls-key-pair") pod "nmstate-webhook-6b89b748d8-vw4tc" (UID: "949ff973-0dba-43c3-9797-a11b5df07b78") : secret "openshift-nmstate-webhook" not found Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.330009 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc7gn\" (UniqueName: \"kubernetes.io/projected/ec2443e8-31e2-462e-8228-20b836a0293b-kube-api-access-gc7gn\") pod \"nmstate-handler-tn5df\" (UID: \"ec2443e8-31e2-462e-8228-20b836a0293b\") " pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.342152 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xffsk\" (UniqueName: \"kubernetes.io/projected/949ff973-0dba-43c3-9797-a11b5df07b78-kube-api-access-xffsk\") pod \"nmstate-webhook-6b89b748d8-vw4tc\" (UID: \"949ff973-0dba-43c3-9797-a11b5df07b78\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.375552 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.406097 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.407676 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4j6q6\" (UID: \"1b33b046-047a-4fb3-a8f7-5878cb5b67a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.407795 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4j6q6\" (UID: \"1b33b046-047a-4fb3-a8f7-5878cb5b67a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.407954 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbqhk\" (UniqueName: \"kubernetes.io/projected/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-kube-api-access-nbqhk\") pod \"nmstate-console-plugin-5874bd7bc5-4j6q6\" (UID: \"1b33b046-047a-4fb3-a8f7-5878cb5b67a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:05 crc kubenswrapper[4767]: E1124 21:50:05.409641 4767 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 24 21:50:05 crc kubenswrapper[4767]: E1124 21:50:05.409703 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-plugin-serving-cert podName:1b33b046-047a-4fb3-a8f7-5878cb5b67a4 nodeName:}" failed. No retries permitted until 2025-11-24 21:50:05.909684205 +0000 UTC m=+688.826667577 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-4j6q6" (UID: "1b33b046-047a-4fb3-a8f7-5878cb5b67a4") : secret "plugin-serving-cert" not found Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.410748 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4j6q6\" (UID: \"1b33b046-047a-4fb3-a8f7-5878cb5b67a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.437701 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbqhk\" (UniqueName: \"kubernetes.io/projected/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-kube-api-access-nbqhk\") pod \"nmstate-console-plugin-5874bd7bc5-4j6q6\" (UID: \"1b33b046-047a-4fb3-a8f7-5878cb5b67a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.438047 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6969745747-2rkwk"] Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.443708 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.455414 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6969745747-2rkwk"] Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.484688 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.484994 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.508633 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c094a3d1-28a3-4b88-92fc-82026d73640d-console-oauth-config\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.508692 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-service-ca\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.508720 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhfl2\" (UniqueName: \"kubernetes.io/projected/c094a3d1-28a3-4b88-92fc-82026d73640d-kube-api-access-fhfl2\") pod \"console-6969745747-2rkwk\" (UID: 
\"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.508738 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-oauth-serving-cert\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.508768 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c094a3d1-28a3-4b88-92fc-82026d73640d-console-serving-cert\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.508902 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-console-config\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.508955 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-trusted-ca-bundle\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.610926 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhfl2\" (UniqueName: \"kubernetes.io/projected/c094a3d1-28a3-4b88-92fc-82026d73640d-kube-api-access-fhfl2\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.610974 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-oauth-serving-cert\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.611005 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c094a3d1-28a3-4b88-92fc-82026d73640d-console-serving-cert\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.611031 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-console-config\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.611049 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-trusted-ca-bundle\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.611089 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c094a3d1-28a3-4b88-92fc-82026d73640d-console-oauth-config\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.611129 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-service-ca\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.612383 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-console-config\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.612429 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-oauth-serving-cert\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.613404 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-trusted-ca-bundle\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.614094 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c094a3d1-28a3-4b88-92fc-82026d73640d-service-ca\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.617010 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c094a3d1-28a3-4b88-92fc-82026d73640d-console-serving-cert\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.618981 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c094a3d1-28a3-4b88-92fc-82026d73640d-console-oauth-config\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.627944 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhfl2\" (UniqueName: 
\"kubernetes.io/projected/c094a3d1-28a3-4b88-92fc-82026d73640d-kube-api-access-fhfl2\") pod \"console-6969745747-2rkwk\" (UID: \"c094a3d1-28a3-4b88-92fc-82026d73640d\") " pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.633613 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f"] Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.769457 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.814116 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tn5df" event={"ID":"ec2443e8-31e2-462e-8228-20b836a0293b","Type":"ContainerStarted","Data":"e78c32112587d5f29a185762f0a65eba1f7afdd151b02cff2cdb7da5d29b198c"} Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.814138 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/949ff973-0dba-43c3-9797-a11b5df07b78-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-vw4tc\" (UID: \"949ff973-0dba-43c3-9797-a11b5df07b78\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.815879 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f" event={"ID":"03c475f5-6bf2-4d5b-9d4a-9f7e9a796a14","Type":"ContainerStarted","Data":"e200bb18afda1f2431da42aa144769e509e5a1f8968872a65068333cd0ad1e68"} Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.820058 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/949ff973-0dba-43c3-9797-a11b5df07b78-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-vw4tc\" (UID: \"949ff973-0dba-43c3-9797-a11b5df07b78\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.915422 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4j6q6\" (UID: \"1b33b046-047a-4fb3-a8f7-5878cb5b67a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.918822 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b33b046-047a-4fb3-a8f7-5878cb5b67a4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4j6q6\" (UID: \"1b33b046-047a-4fb3-a8f7-5878cb5b67a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:05 crc kubenswrapper[4767]: I1124 21:50:05.984061 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" Nov 24 21:50:06 crc kubenswrapper[4767]: I1124 21:50:06.001625 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6969745747-2rkwk"] Nov 24 21:50:06 crc kubenswrapper[4767]: W1124 21:50:06.011634 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc094a3d1_28a3_4b88_92fc_82026d73640d.slice/crio-e9a66f624ea3146150893d6f472d760e6a3549af232ef3f2770f11dfa78ac017 WatchSource:0}: Error finding container e9a66f624ea3146150893d6f472d760e6a3549af232ef3f2770f11dfa78ac017: Status 404 returned error can't find the container with id e9a66f624ea3146150893d6f472d760e6a3549af232ef3f2770f11dfa78ac017 Nov 24 21:50:06 crc kubenswrapper[4767]: I1124 21:50:06.130903 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" Nov 24 21:50:06 crc kubenswrapper[4767]: I1124 21:50:06.226532 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc"] Nov 24 21:50:06 crc kubenswrapper[4767]: I1124 21:50:06.362715 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6"] Nov 24 21:50:06 crc kubenswrapper[4767]: W1124 21:50:06.368896 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b33b046_047a_4fb3_a8f7_5878cb5b67a4.slice/crio-e5b2ccbb187f8952a1a3c0959a9a6bd2077d96797a4eaaa0b5bf6b81507d93fd WatchSource:0}: Error finding container e5b2ccbb187f8952a1a3c0959a9a6bd2077d96797a4eaaa0b5bf6b81507d93fd: Status 404 returned error can't find the container with id e5b2ccbb187f8952a1a3c0959a9a6bd2077d96797a4eaaa0b5bf6b81507d93fd Nov 24 21:50:06 crc kubenswrapper[4767]: I1124 21:50:06.824749 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" event={"ID":"949ff973-0dba-43c3-9797-a11b5df07b78","Type":"ContainerStarted","Data":"e92215ffd471e98355fb7fc24bd7c819d46f68d637d79edf3b0df9db3ce1a247"} Nov 24 21:50:06 crc kubenswrapper[4767]: I1124 21:50:06.827424 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6969745747-2rkwk" event={"ID":"c094a3d1-28a3-4b88-92fc-82026d73640d","Type":"ContainerStarted","Data":"cc5e468fb80b86035562d33398686b584c2ebdf94c958c910f1174759c2b562f"} Nov 24 21:50:06 crc kubenswrapper[4767]: I1124 21:50:06.827456 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6969745747-2rkwk" event={"ID":"c094a3d1-28a3-4b88-92fc-82026d73640d","Type":"ContainerStarted","Data":"e9a66f624ea3146150893d6f472d760e6a3549af232ef3f2770f11dfa78ac017"} Nov 24 21:50:06 crc kubenswrapper[4767]: I1124 21:50:06.829164 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" event={"ID":"1b33b046-047a-4fb3-a8f7-5878cb5b67a4","Type":"ContainerStarted","Data":"e5b2ccbb187f8952a1a3c0959a9a6bd2077d96797a4eaaa0b5bf6b81507d93fd"} Nov 24 21:50:06 crc kubenswrapper[4767]: I1124 21:50:06.859529 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6969745747-2rkwk" podStartSLOduration=1.8595065640000001 podStartE2EDuration="1.859506564s" podCreationTimestamp="2025-11-24 21:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:50:06.854630352 +0000 UTC m=+689.771613774" watchObservedRunningTime="2025-11-24 21:50:06.859506564 +0000 UTC m=+689.776489946" Nov 24 21:50:08 crc kubenswrapper[4767]: I1124 21:50:08.842379 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f" event={"ID":"03c475f5-6bf2-4d5b-9d4a-9f7e9a796a14","Type":"ContainerStarted","Data":"5927d946142e7f96c42b81b9e66ed5d1428939e0ceb08330b448c1b356336b75"} Nov 24 21:50:08 crc kubenswrapper[4767]: I1124 21:50:08.846097 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" event={"ID":"949ff973-0dba-43c3-9797-a11b5df07b78","Type":"ContainerStarted","Data":"83591bfd0ebfebec2820fe5aa930768cab2db4dcd99c1a9b2438bad5451f0475"} Nov 24 21:50:08 crc kubenswrapper[4767]: I1124 21:50:08.847071 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" Nov 24 21:50:08 crc kubenswrapper[4767]: I1124 21:50:08.848603 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tn5df" event={"ID":"ec2443e8-31e2-462e-8228-20b836a0293b","Type":"ContainerStarted","Data":"dd111a36c74eb59b21acded3c1ada11e299364283f42f65fed64d85b6c209768"} Nov 24 21:50:08 crc kubenswrapper[4767]: I1124 21:50:08.848950 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:08 crc kubenswrapper[4767]: I1124 21:50:08.891092 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" podStartSLOduration=2.035307376 podStartE2EDuration="3.891070101s" podCreationTimestamp="2025-11-24 21:50:05 +0000 UTC" firstStartedPulling="2025-11-24 21:50:06.244193201 +0000 UTC m=+689.161176573" lastFinishedPulling="2025-11-24 21:50:08.099955926 +0000 UTC m=+691.016939298" observedRunningTime="2025-11-24 21:50:08.86903285 +0000 UTC m=+691.786016242" watchObservedRunningTime="2025-11-24 21:50:08.891070101 +0000 UTC m=+691.808053493" Nov 24 21:50:08 crc kubenswrapper[4767]: I1124 21:50:08.893133 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tn5df" podStartSLOduration=1.250852265 podStartE2EDuration="3.89312303s" podCreationTimestamp="2025-11-24 21:50:05 +0000 UTC" firstStartedPulling="2025-11-24 21:50:05.447792073 +0000 UTC m=+688.364775445" lastFinishedPulling="2025-11-24 21:50:08.090062818 +0000 UTC m=+691.007046210" observedRunningTime="2025-11-24 21:50:08.887924229 +0000 UTC m=+691.804907601" watchObservedRunningTime="2025-11-24 21:50:08.89312303 +0000 UTC m=+691.810106402" Nov 24 21:50:09 crc kubenswrapper[4767]: I1124 21:50:09.863608 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" event={"ID":"1b33b046-047a-4fb3-a8f7-5878cb5b67a4","Type":"ContainerStarted","Data":"b6ac409b0039dca33a7a276044674acb82636f4b38a9a220b4ef7016c749014b"} Nov 24 21:50:09 crc kubenswrapper[4767]: I1124 21:50:09.890450 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4j6q6" podStartSLOduration=2.123479179 podStartE2EDuration="4.890342138s" podCreationTimestamp="2025-11-24 21:50:05 +0000 UTC" firstStartedPulling="2025-11-24 21:50:06.370923746 +0000 UTC m=+689.287907118" 
lastFinishedPulling="2025-11-24 21:50:09.137786705 +0000 UTC m=+692.054770077" observedRunningTime="2025-11-24 21:50:09.881001896 +0000 UTC m=+692.797985268" watchObservedRunningTime="2025-11-24 21:50:09.890342138 +0000 UTC m=+692.807325520" Nov 24 21:50:10 crc kubenswrapper[4767]: I1124 21:50:10.870696 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f" event={"ID":"03c475f5-6bf2-4d5b-9d4a-9f7e9a796a14","Type":"ContainerStarted","Data":"b8455503cde79506463470939c03319199f196dc98ac727e4577a0cb6151463c"} Nov 24 21:50:10 crc kubenswrapper[4767]: I1124 21:50:10.906966 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vc96f" podStartSLOduration=0.915796142 podStartE2EDuration="5.90694099s" podCreationTimestamp="2025-11-24 21:50:05 +0000 UTC" firstStartedPulling="2025-11-24 21:50:05.639670973 +0000 UTC m=+688.556654345" lastFinishedPulling="2025-11-24 21:50:10.630815821 +0000 UTC m=+693.547799193" observedRunningTime="2025-11-24 21:50:10.898198686 +0000 UTC m=+693.815182088" watchObservedRunningTime="2025-11-24 21:50:10.90694099 +0000 UTC m=+693.823924402" Nov 24 21:50:15 crc kubenswrapper[4767]: I1124 21:50:15.538880 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tn5df" Nov 24 21:50:15 crc kubenswrapper[4767]: I1124 21:50:15.770062 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:15 crc kubenswrapper[4767]: I1124 21:50:15.770129 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:15 crc kubenswrapper[4767]: I1124 21:50:15.786578 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:15 crc kubenswrapper[4767]: I1124 21:50:15.912729 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6969745747-2rkwk" Nov 24 21:50:15 crc kubenswrapper[4767]: I1124 21:50:15.972849 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mp4ng"] Nov 24 21:50:25 crc kubenswrapper[4767]: I1124 21:50:25.993955 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-vw4tc" Nov 24 21:50:35 crc kubenswrapper[4767]: I1124 21:50:35.481497 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:50:35 crc kubenswrapper[4767]: I1124 21:50:35.482500 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.784513 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff"] Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.787035 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.790079 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.793585 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff"] Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.859785 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm8gz\" (UniqueName: \"kubernetes.io/projected/585faaa9-4163-4066-b609-77274cc5a207-kube-api-access-bm8gz\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.859908 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.859956 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.960813 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.961445 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.961710 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm8gz\" (UniqueName: \"kubernetes.io/projected/585faaa9-4163-4066-b609-77274cc5a207-kube-api-access-bm8gz\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.961893 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.961442 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:39 crc kubenswrapper[4767]: I1124 21:50:39.979859 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm8gz\" (UniqueName: \"kubernetes.io/projected/585faaa9-4163-4066-b609-77274cc5a207-kube-api-access-bm8gz\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:40 crc kubenswrapper[4767]: I1124 21:50:40.101109 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:40 crc kubenswrapper[4767]: I1124 21:50:40.297434 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff"] Nov 24 21:50:40 crc kubenswrapper[4767]: W1124 21:50:40.307588 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod585faaa9_4163_4066_b609_77274cc5a207.slice/crio-cf0f7ab83a6d8a8bf44d80ef1177eee6364f85152288f2552fef5722cf4d6515 WatchSource:0}: Error finding container cf0f7ab83a6d8a8bf44d80ef1177eee6364f85152288f2552fef5722cf4d6515: Status 404 returned error can't find the container with id cf0f7ab83a6d8a8bf44d80ef1177eee6364f85152288f2552fef5722cf4d6515 Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.021758 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-mp4ng" podUID="86bad83e-cde9-43a8-803a-fda0e14ef559" containerName="console" containerID="cri-o://c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef" gracePeriod=15 Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.098828 4767 generic.go:334] "Generic (PLEG): container finished" podID="585faaa9-4163-4066-b609-77274cc5a207" containerID="8f0f0b250c5f4ad2272ba48947cf17f4509c5d52eb9bc6d7f1420507893659b8" exitCode=0 Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.098881 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" event={"ID":"585faaa9-4163-4066-b609-77274cc5a207","Type":"ContainerDied","Data":"8f0f0b250c5f4ad2272ba48947cf17f4509c5d52eb9bc6d7f1420507893659b8"} Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.098911 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" event={"ID":"585faaa9-4163-4066-b609-77274cc5a207","Type":"ContainerStarted","Data":"cf0f7ab83a6d8a8bf44d80ef1177eee6364f85152288f2552fef5722cf4d6515"} Nov 24 21:50:41 crc 
Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.396849 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mp4ng"
Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.483214 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-console-config\") pod \"86bad83e-cde9-43a8-803a-fda0e14ef559\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") "
Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.483338 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-trusted-ca-bundle\") pod \"86bad83e-cde9-43a8-803a-fda0e14ef559\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") "
Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.483361 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp945\" (UniqueName: \"kubernetes.io/projected/86bad83e-cde9-43a8-803a-fda0e14ef559-kube-api-access-hp945\") pod \"86bad83e-cde9-43a8-803a-fda0e14ef559\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") "
Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.483375 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-service-ca\") pod \"86bad83e-cde9-43a8-803a-fda0e14ef559\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") "
Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.483393 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-oauth-config\") pod \"86bad83e-cde9-43a8-803a-fda0e14ef559\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") "
Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.483425 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-serving-cert\") pod \"86bad83e-cde9-43a8-803a-fda0e14ef559\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") "
Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.483496 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-oauth-serving-cert\") pod \"86bad83e-cde9-43a8-803a-fda0e14ef559\" (UID: \"86bad83e-cde9-43a8-803a-fda0e14ef559\") "
Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.484460 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-service-ca" (OuterVolumeSpecName: "service-ca") pod "86bad83e-cde9-43a8-803a-fda0e14ef559" (UID: "86bad83e-cde9-43a8-803a-fda0e14ef559"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.484575 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "86bad83e-cde9-43a8-803a-fda0e14ef559" (UID: "86bad83e-cde9-43a8-803a-fda0e14ef559"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.484612 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "86bad83e-cde9-43a8-803a-fda0e14ef559" (UID: "86bad83e-cde9-43a8-803a-fda0e14ef559"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.484730 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-console-config" (OuterVolumeSpecName: "console-config") pod "86bad83e-cde9-43a8-803a-fda0e14ef559" (UID: "86bad83e-cde9-43a8-803a-fda0e14ef559"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.490906 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "86bad83e-cde9-43a8-803a-fda0e14ef559" (UID: "86bad83e-cde9-43a8-803a-fda0e14ef559"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.492768 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "86bad83e-cde9-43a8-803a-fda0e14ef559" (UID: "86bad83e-cde9-43a8-803a-fda0e14ef559"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.493580 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86bad83e-cde9-43a8-803a-fda0e14ef559-kube-api-access-hp945" (OuterVolumeSpecName: "kube-api-access-hp945") pod "86bad83e-cde9-43a8-803a-fda0e14ef559" (UID: "86bad83e-cde9-43a8-803a-fda0e14ef559"). InnerVolumeSpecName "kube-api-access-hp945". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.584558 4767 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.584592 4767 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.584604 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.584614 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp945\" (UniqueName: \"kubernetes.io/projected/86bad83e-cde9-43a8-803a-fda0e14ef559-kube-api-access-hp945\") on node \"crc\" DevicePath \"\"" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.584625 4767 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86bad83e-cde9-43a8-803a-fda0e14ef559-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.584633 4767 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:50:41 crc kubenswrapper[4767]: I1124 21:50:41.584641 4767 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86bad83e-cde9-43a8-803a-fda0e14ef559-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:50:42 crc kubenswrapper[4767]: I1124 21:50:42.108832 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mp4ng_86bad83e-cde9-43a8-803a-fda0e14ef559/console/0.log" Nov 24 21:50:42 crc kubenswrapper[4767]: I1124 21:50:42.109251 4767 generic.go:334] "Generic (PLEG): container finished" podID="86bad83e-cde9-43a8-803a-fda0e14ef559" containerID="c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef" exitCode=2 Nov 24 21:50:42 crc kubenswrapper[4767]: I1124 21:50:42.109329 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mp4ng" event={"ID":"86bad83e-cde9-43a8-803a-fda0e14ef559","Type":"ContainerDied","Data":"c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef"} Nov 24 21:50:42 crc kubenswrapper[4767]: I1124 21:50:42.109371 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mp4ng" event={"ID":"86bad83e-cde9-43a8-803a-fda0e14ef559","Type":"ContainerDied","Data":"6eb36b608bfc487f14d967db87f59a5851c3239d91d579c4e1b5f81175f9df33"} Nov 24 21:50:42 crc kubenswrapper[4767]: I1124 21:50:42.109402 4767 scope.go:117] "RemoveContainer" containerID="c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef" Nov 24 21:50:42 crc kubenswrapper[4767]: I1124 21:50:42.109403 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-mp4ng" Nov 24 21:50:42 crc kubenswrapper[4767]: I1124 21:50:42.144440 4767 scope.go:117] "RemoveContainer" containerID="c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef" Nov 24 21:50:42 crc kubenswrapper[4767]: E1124 21:50:42.145141 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef\": container with ID starting with c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef not found: ID does not exist" containerID="c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef" Nov 24 21:50:42 crc kubenswrapper[4767]: I1124 21:50:42.145233 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef"} err="failed to get container status \"c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef\": rpc error: code = NotFound desc = could not find container \"c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef\": container with ID starting with c0b2c6ede74c38fc572194813033c3f562bd1c487c0f5cd09b4768fd4a10d1ef not found: ID does not exist" Nov 24 21:50:42 crc kubenswrapper[4767]: I1124 21:50:42.172418 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mp4ng"] Nov 24 21:50:42 crc kubenswrapper[4767]: I1124 21:50:42.174123 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-mp4ng"] Nov 24 21:50:42 crc kubenswrapper[4767]: I1124 21:50:42.322465 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86bad83e-cde9-43a8-803a-fda0e14ef559" path="/var/lib/kubelet/pods/86bad83e-cde9-43a8-803a-fda0e14ef559/volumes" Nov 24 21:50:43 crc kubenswrapper[4767]: I1124 21:50:43.117984 4767 generic.go:334] "Generic (PLEG): container finished" podID="585faaa9-4163-4066-b609-77274cc5a207" containerID="7d0d5705a0c772a7fd59d67f298d627dba16038b76935bc3ee6188d55a02c321" exitCode=0 Nov 24 21:50:43 crc kubenswrapper[4767]: I1124 21:50:43.118056 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" event={"ID":"585faaa9-4163-4066-b609-77274cc5a207","Type":"ContainerDied","Data":"7d0d5705a0c772a7fd59d67f298d627dba16038b76935bc3ee6188d55a02c321"} Nov 24 21:50:44 crc kubenswrapper[4767]: I1124 21:50:44.130991 4767 generic.go:334] "Generic (PLEG): container finished" podID="585faaa9-4163-4066-b609-77274cc5a207" containerID="42eea46119f689724e5433c7b4cd25226dc935fec969df2efa9e73a5bd04bf5b" exitCode=0 Nov 24 21:50:44 crc kubenswrapper[4767]: I1124 21:50:44.131137 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" event={"ID":"585faaa9-4163-4066-b609-77274cc5a207","Type":"ContainerDied","Data":"42eea46119f689724e5433c7b4cd25226dc935fec969df2efa9e73a5bd04bf5b"} Nov 24 21:50:45 crc kubenswrapper[4767]: I1124 21:50:45.404669 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:45 crc kubenswrapper[4767]: I1124 21:50:45.431979 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-util\") pod \"585faaa9-4163-4066-b609-77274cc5a207\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " Nov 24 21:50:45 crc kubenswrapper[4767]: I1124 21:50:45.432051 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-bundle\") pod \"585faaa9-4163-4066-b609-77274cc5a207\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " Nov 24 21:50:45 crc kubenswrapper[4767]: I1124 21:50:45.432114 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm8gz\" (UniqueName: \"kubernetes.io/projected/585faaa9-4163-4066-b609-77274cc5a207-kube-api-access-bm8gz\") pod \"585faaa9-4163-4066-b609-77274cc5a207\" (UID: \"585faaa9-4163-4066-b609-77274cc5a207\") " Nov 24 21:50:45 crc kubenswrapper[4767]: I1124 21:50:45.435590 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-bundle" (OuterVolumeSpecName: "bundle") pod "585faaa9-4163-4066-b609-77274cc5a207" (UID: "585faaa9-4163-4066-b609-77274cc5a207"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:50:45 crc kubenswrapper[4767]: I1124 21:50:45.440872 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585faaa9-4163-4066-b609-77274cc5a207-kube-api-access-bm8gz" (OuterVolumeSpecName: "kube-api-access-bm8gz") pod "585faaa9-4163-4066-b609-77274cc5a207" (UID: "585faaa9-4163-4066-b609-77274cc5a207"). InnerVolumeSpecName "kube-api-access-bm8gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:50:45 crc kubenswrapper[4767]: I1124 21:50:45.456539 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-util" (OuterVolumeSpecName: "util") pod "585faaa9-4163-4066-b609-77274cc5a207" (UID: "585faaa9-4163-4066-b609-77274cc5a207"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:50:45 crc kubenswrapper[4767]: I1124 21:50:45.533799 4767 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:50:45 crc kubenswrapper[4767]: I1124 21:50:45.533841 4767 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585faaa9-4163-4066-b609-77274cc5a207-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:50:45 crc kubenswrapper[4767]: I1124 21:50:45.533861 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm8gz\" (UniqueName: \"kubernetes.io/projected/585faaa9-4163-4066-b609-77274cc5a207-kube-api-access-bm8gz\") on node \"crc\" DevicePath \"\"" Nov 24 21:50:46 crc kubenswrapper[4767]: I1124 21:50:46.147795 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" event={"ID":"585faaa9-4163-4066-b609-77274cc5a207","Type":"ContainerDied","Data":"cf0f7ab83a6d8a8bf44d80ef1177eee6364f85152288f2552fef5722cf4d6515"} Nov 24 21:50:46 crc kubenswrapper[4767]: I1124 21:50:46.147841 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf0f7ab83a6d8a8bf44d80ef1177eee6364f85152288f2552fef5722cf4d6515" Nov 24 21:50:46 crc kubenswrapper[4767]: I1124 21:50:46.148412 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.894572 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l"] Nov 24 21:50:54 crc kubenswrapper[4767]: E1124 21:50:54.895469 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585faaa9-4163-4066-b609-77274cc5a207" containerName="util" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.895485 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="585faaa9-4163-4066-b609-77274cc5a207" containerName="util" Nov 24 21:50:54 crc kubenswrapper[4767]: E1124 21:50:54.895503 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585faaa9-4163-4066-b609-77274cc5a207" containerName="pull" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.895510 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="585faaa9-4163-4066-b609-77274cc5a207" containerName="pull" Nov 24 21:50:54 crc kubenswrapper[4767]: E1124 21:50:54.895521 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bad83e-cde9-43a8-803a-fda0e14ef559" containerName="console" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.895533 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bad83e-cde9-43a8-803a-fda0e14ef559" containerName="console" Nov 24 21:50:54 crc kubenswrapper[4767]: E1124 21:50:54.895558 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585faaa9-4163-4066-b609-77274cc5a207" containerName="extract" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.895566 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="585faaa9-4163-4066-b609-77274cc5a207" containerName="extract" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.895707 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="585faaa9-4163-4066-b609-77274cc5a207" containerName="extract" Nov 
24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.895730 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bad83e-cde9-43a8-803a-fda0e14ef559" containerName="console" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.896196 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.899213 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.899235 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.899429 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.899526 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-5x22l" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.899730 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.908656 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l"] Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.977828 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/731cc1e2-6b05-450a-b193-7642ea4674ba-webhook-cert\") pod \"metallb-operator-controller-manager-7754dcd9b8-4f27l\" (UID: \"731cc1e2-6b05-450a-b193-7642ea4674ba\") " pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.977878 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67qh8\" (UniqueName: \"kubernetes.io/projected/731cc1e2-6b05-450a-b193-7642ea4674ba-kube-api-access-67qh8\") pod \"metallb-operator-controller-manager-7754dcd9b8-4f27l\" (UID: \"731cc1e2-6b05-450a-b193-7642ea4674ba\") " pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:50:54 crc kubenswrapper[4767]: I1124 21:50:54.977903 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/731cc1e2-6b05-450a-b193-7642ea4674ba-apiservice-cert\") pod \"metallb-operator-controller-manager-7754dcd9b8-4f27l\" (UID: \"731cc1e2-6b05-450a-b193-7642ea4674ba\") " pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.079110 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/731cc1e2-6b05-450a-b193-7642ea4674ba-apiservice-cert\") pod \"metallb-operator-controller-manager-7754dcd9b8-4f27l\" (UID: \"731cc1e2-6b05-450a-b193-7642ea4674ba\") " pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.079235 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/731cc1e2-6b05-450a-b193-7642ea4674ba-webhook-cert\") pod \"metallb-operator-controller-manager-7754dcd9b8-4f27l\" (UID: \"731cc1e2-6b05-450a-b193-7642ea4674ba\") " pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.079289 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67qh8\" (UniqueName: \"kubernetes.io/projected/731cc1e2-6b05-450a-b193-7642ea4674ba-kube-api-access-67qh8\") pod \"metallb-operator-controller-manager-7754dcd9b8-4f27l\" (UID: \"731cc1e2-6b05-450a-b193-7642ea4674ba\") " pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.084869 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/731cc1e2-6b05-450a-b193-7642ea4674ba-apiservice-cert\") pod \"metallb-operator-controller-manager-7754dcd9b8-4f27l\" (UID: \"731cc1e2-6b05-450a-b193-7642ea4674ba\") " pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.085281 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/731cc1e2-6b05-450a-b193-7642ea4674ba-webhook-cert\") pod \"metallb-operator-controller-manager-7754dcd9b8-4f27l\" (UID: \"731cc1e2-6b05-450a-b193-7642ea4674ba\") " pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.112010 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67qh8\" (UniqueName: \"kubernetes.io/projected/731cc1e2-6b05-450a-b193-7642ea4674ba-kube-api-access-67qh8\") pod \"metallb-operator-controller-manager-7754dcd9b8-4f27l\" (UID: \"731cc1e2-6b05-450a-b193-7642ea4674ba\") " pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.210449 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x"] Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.211334 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.215867 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.216779 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qn6lk" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.218346 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.219887 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.234209 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x"] Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.282649 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef716a61-b638-498b-b9b0-46ce4d9b2a4b-webhook-cert\") pod \"metallb-operator-webhook-server-549d689cb8-wpm9x\" (UID: \"ef716a61-b638-498b-b9b0-46ce4d9b2a4b\") " pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.282734 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef716a61-b638-498b-b9b0-46ce4d9b2a4b-apiservice-cert\") pod \"metallb-operator-webhook-server-549d689cb8-wpm9x\" (UID: \"ef716a61-b638-498b-b9b0-46ce4d9b2a4b\") " pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.282758 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4vk2\" (UniqueName: \"kubernetes.io/projected/ef716a61-b638-498b-b9b0-46ce4d9b2a4b-kube-api-access-x4vk2\") pod \"metallb-operator-webhook-server-549d689cb8-wpm9x\" (UID: \"ef716a61-b638-498b-b9b0-46ce4d9b2a4b\") " pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.383741 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef716a61-b638-498b-b9b0-46ce4d9b2a4b-webhook-cert\") pod \"metallb-operator-webhook-server-549d689cb8-wpm9x\" (UID: \"ef716a61-b638-498b-b9b0-46ce4d9b2a4b\") " pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.383849 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef716a61-b638-498b-b9b0-46ce4d9b2a4b-apiservice-cert\") pod \"metallb-operator-webhook-server-549d689cb8-wpm9x\" (UID: \"ef716a61-b638-498b-b9b0-46ce4d9b2a4b\") " pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.383877 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4vk2\" (UniqueName: \"kubernetes.io/projected/ef716a61-b638-498b-b9b0-46ce4d9b2a4b-kube-api-access-x4vk2\") pod \"metallb-operator-webhook-server-549d689cb8-wpm9x\" (UID: \"ef716a61-b638-498b-b9b0-46ce4d9b2a4b\") " pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.388567 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef716a61-b638-498b-b9b0-46ce4d9b2a4b-webhook-cert\") pod \"metallb-operator-webhook-server-549d689cb8-wpm9x\" (UID: \"ef716a61-b638-498b-b9b0-46ce4d9b2a4b\") " 
pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.388599 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef716a61-b638-498b-b9b0-46ce4d9b2a4b-apiservice-cert\") pod \"metallb-operator-webhook-server-549d689cb8-wpm9x\" (UID: \"ef716a61-b638-498b-b9b0-46ce4d9b2a4b\") " pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.400429 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4vk2\" (UniqueName: \"kubernetes.io/projected/ef716a61-b638-498b-b9b0-46ce4d9b2a4b-kube-api-access-x4vk2\") pod \"metallb-operator-webhook-server-549d689cb8-wpm9x\" (UID: \"ef716a61-b638-498b-b9b0-46ce4d9b2a4b\") " pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.533400 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.565787 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l"] Nov 24 21:50:55 crc kubenswrapper[4767]: I1124 21:50:55.960812 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x"] Nov 24 21:50:55 crc kubenswrapper[4767]: W1124 21:50:55.974642 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef716a61_b638_498b_b9b0_46ce4d9b2a4b.slice/crio-db3c204a2fe6f933aab39c3f7ad3a734e4de6794c6d464a245c167d72be93173 WatchSource:0}: Error finding container db3c204a2fe6f933aab39c3f7ad3a734e4de6794c6d464a245c167d72be93173: Status 404 returned error can't find the container with id db3c204a2fe6f933aab39c3f7ad3a734e4de6794c6d464a245c167d72be93173 Nov 24 21:50:56 crc kubenswrapper[4767]: I1124 21:50:56.216502 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" event={"ID":"ef716a61-b638-498b-b9b0-46ce4d9b2a4b","Type":"ContainerStarted","Data":"db3c204a2fe6f933aab39c3f7ad3a734e4de6794c6d464a245c167d72be93173"} Nov 24 21:50:56 crc kubenswrapper[4767]: I1124 21:50:56.218630 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" event={"ID":"731cc1e2-6b05-450a-b193-7642ea4674ba","Type":"ContainerStarted","Data":"ba7cc9cd0881f42da4aaa78b01b053979a604b311f807cbb16edc049de6f66a2"} Nov 24 21:51:01 crc kubenswrapper[4767]: I1124 21:51:01.252231 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" event={"ID":"731cc1e2-6b05-450a-b193-7642ea4674ba","Type":"ContainerStarted","Data":"bc96701a5da108b023f8cb1a11fde75d2e97299e2d1d9b28756073b3b8fa3581"} Nov 24 21:51:01 crc kubenswrapper[4767]: I1124 21:51:01.254137 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" Nov 24 21:51:01 crc kubenswrapper[4767]: I1124 21:51:01.254307 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:51:01 crc kubenswrapper[4767]: I1124 
21:51:01.254443 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" event={"ID":"ef716a61-b638-498b-b9b0-46ce4d9b2a4b","Type":"ContainerStarted","Data":"b8842aa6bab21a0829ff3a1def33692bafee8f9f1f08244e4f40df9509f20b55"} Nov 24 21:51:01 crc kubenswrapper[4767]: I1124 21:51:01.312305 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l" podStartSLOduration=2.521714243 podStartE2EDuration="7.312287137s" podCreationTimestamp="2025-11-24 21:50:54 +0000 UTC" firstStartedPulling="2025-11-24 21:50:55.590629395 +0000 UTC m=+738.507612767" lastFinishedPulling="2025-11-24 21:51:00.381202289 +0000 UTC m=+743.298185661" observedRunningTime="2025-11-24 21:51:01.293478749 +0000 UTC m=+744.210462141" watchObservedRunningTime="2025-11-24 21:51:01.312287137 +0000 UTC m=+744.229270499" Nov 24 21:51:01 crc kubenswrapper[4767]: I1124 21:51:01.313010 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" podStartSLOduration=1.891986979 podStartE2EDuration="6.313006217s" podCreationTimestamp="2025-11-24 21:50:55 +0000 UTC" firstStartedPulling="2025-11-24 21:50:55.978126225 +0000 UTC m=+738.895109597" lastFinishedPulling="2025-11-24 21:51:00.399145433 +0000 UTC m=+743.316128835" observedRunningTime="2025-11-24 21:51:01.307621713 +0000 UTC m=+744.224605085" watchObservedRunningTime="2025-11-24 21:51:01.313006217 +0000 UTC m=+744.229989589" Nov 24 21:51:05 crc kubenswrapper[4767]: I1124 21:51:05.481294 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:51:05 crc kubenswrapper[4767]: I1124 21:51:05.481627 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:51:05 crc kubenswrapper[4767]: I1124 21:51:05.481688 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:51:05 crc kubenswrapper[4767]: I1124 21:51:05.482459 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c376cc0e5d0460b519433b94fced4d0cba810050689003c18c581dd720c940d"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:51:05 crc kubenswrapper[4767]: I1124 21:51:05.482548 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://5c376cc0e5d0460b519433b94fced4d0cba810050689003c18c581dd720c940d" gracePeriod=600 Nov 24 21:51:06 crc kubenswrapper[4767]: I1124 21:51:06.291165 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" 
containerID="5c376cc0e5d0460b519433b94fced4d0cba810050689003c18c581dd720c940d" exitCode=0 Nov 24 21:51:06 crc kubenswrapper[4767]: I1124 21:51:06.291210 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"5c376cc0e5d0460b519433b94fced4d0cba810050689003c18c581dd720c940d"} Nov 24 21:51:06 crc kubenswrapper[4767]: I1124 21:51:06.292143 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"e688e489a883e7391dd101f5a5646e7206f88c9971f33a2eee17c7b8ffed628d"} Nov 24 21:51:06 crc kubenswrapper[4767]: I1124 21:51:06.292213 4767 scope.go:117] "RemoveContainer" containerID="be42d6aff78e041edb5424f488e6dd92a88fa38a755f0e75223f00653906bf6d" Nov 24 21:51:14 crc kubenswrapper[4767]: I1124 21:51:14.787097 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wrbrz"] Nov 24 21:51:14 crc kubenswrapper[4767]: I1124 21:51:14.787796 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" podUID="e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" containerName="controller-manager" containerID="cri-o://e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb" gracePeriod=30 Nov 24 21:51:14 crc kubenswrapper[4767]: I1124 21:51:14.863182 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8"] Nov 24 21:51:14 crc kubenswrapper[4767]: I1124 21:51:14.863930 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" podUID="d58a04f0-dcce-4a15-9248-06fe40d8fceb" containerName="route-controller-manager" containerID="cri-o://0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6" gracePeriod=30 Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.218058 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.284072 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.344152 4767 generic.go:334] "Generic (PLEG): container finished" podID="e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" containerID="e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb" exitCode=0 Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.344200 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.344219 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" event={"ID":"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759","Type":"ContainerDied","Data":"e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb"} Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.344248 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wrbrz" event={"ID":"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759","Type":"ContainerDied","Data":"595a32c30272da94613a54eb329de4c79c646bea87399b9ea79ef7045e751ee6"} Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.344281 4767 scope.go:117] "RemoveContainer" containerID="e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.347024 4767 generic.go:334] "Generic (PLEG): container finished" podID="d58a04f0-dcce-4a15-9248-06fe40d8fceb" containerID="0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6" exitCode=0 Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.347066 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" event={"ID":"d58a04f0-dcce-4a15-9248-06fe40d8fceb","Type":"ContainerDied","Data":"0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6"} Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.347094 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" event={"ID":"d58a04f0-dcce-4a15-9248-06fe40d8fceb","Type":"ContainerDied","Data":"5badbc8209128a80466bef3436358bf63c4fdffb24a65984578ca4f7f4b9dd9a"} Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.347142 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.360938 4767 scope.go:117] "RemoveContainer" containerID="e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb" Nov 24 21:51:15 crc kubenswrapper[4767]: E1124 21:51:15.361278 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb\": container with ID starting with e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb not found: ID does not exist" containerID="e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.361316 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb"} err="failed to get container status \"e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb\": rpc error: code = NotFound desc = could not find container \"e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb\": container with ID starting with e4f63f5d7b616af0f2a3bb3f9a43fa3bbe7b83adc2cf4ebad64ee6d1bed703bb not found: ID does not exist" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.361342 4767 scope.go:117] "RemoveContainer" containerID="0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.370635 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58a04f0-dcce-4a15-9248-06fe40d8fceb-serving-cert\") pod \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.370672 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-client-ca\") pod \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.370732 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l56bt\" (UniqueName: \"kubernetes.io/projected/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-kube-api-access-l56bt\") pod \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.370757 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-client-ca\") pod \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.370780 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-proxy-ca-bundles\") pod \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.370816 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgjzc\" (UniqueName: 
\"kubernetes.io/projected/d58a04f0-dcce-4a15-9248-06fe40d8fceb-kube-api-access-fgjzc\") pod \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.370836 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-serving-cert\") pod \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.370870 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-config\") pod \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\" (UID: \"d58a04f0-dcce-4a15-9248-06fe40d8fceb\") " Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.370895 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-config\") pod \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\" (UID: \"e7f7d9e2-58aa-4606-bca2-0e02f7a7f759\") " Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.372773 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-client-ca" (OuterVolumeSpecName: "client-ca") pod "d58a04f0-dcce-4a15-9248-06fe40d8fceb" (UID: "d58a04f0-dcce-4a15-9248-06fe40d8fceb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.373323 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-client-ca" (OuterVolumeSpecName: "client-ca") pod "e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" (UID: "e7f7d9e2-58aa-4606-bca2-0e02f7a7f759"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.374501 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-config" (OuterVolumeSpecName: "config") pod "e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" (UID: "e7f7d9e2-58aa-4606-bca2-0e02f7a7f759"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.374984 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" (UID: "e7f7d9e2-58aa-4606-bca2-0e02f7a7f759"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.375405 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-config" (OuterVolumeSpecName: "config") pod "d58a04f0-dcce-4a15-9248-06fe40d8fceb" (UID: "d58a04f0-dcce-4a15-9248-06fe40d8fceb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.375845 4767 scope.go:117] "RemoveContainer" containerID="0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6" Nov 24 21:51:15 crc kubenswrapper[4767]: E1124 21:51:15.376206 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6\": container with ID starting with 0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6 not found: ID does not exist" containerID="0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.376267 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6"} err="failed to get container status \"0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6\": rpc error: code = NotFound desc = could not find container \"0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6\": container with ID starting with 0b8f99c222776bd458180bf19578af87164bbd75068dcaea4be325777ab787f6 not found: ID does not exist" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.378640 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" (UID: "e7f7d9e2-58aa-4606-bca2-0e02f7a7f759"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.378810 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d58a04f0-dcce-4a15-9248-06fe40d8fceb-kube-api-access-fgjzc" (OuterVolumeSpecName: "kube-api-access-fgjzc") pod "d58a04f0-dcce-4a15-9248-06fe40d8fceb" (UID: "d58a04f0-dcce-4a15-9248-06fe40d8fceb"). InnerVolumeSpecName "kube-api-access-fgjzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.379540 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d58a04f0-dcce-4a15-9248-06fe40d8fceb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d58a04f0-dcce-4a15-9248-06fe40d8fceb" (UID: "d58a04f0-dcce-4a15-9248-06fe40d8fceb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.380568 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-kube-api-access-l56bt" (OuterVolumeSpecName: "kube-api-access-l56bt") pod "e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" (UID: "e7f7d9e2-58aa-4606-bca2-0e02f7a7f759"). InnerVolumeSpecName "kube-api-access-l56bt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.472003 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.472045 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgjzc\" (UniqueName: \"kubernetes.io/projected/d58a04f0-dcce-4a15-9248-06fe40d8fceb-kube-api-access-fgjzc\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.472061 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.472072 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.472084 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.472096 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58a04f0-dcce-4a15-9248-06fe40d8fceb-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.472108 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.472120 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l56bt\" (UniqueName: \"kubernetes.io/projected/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759-kube-api-access-l56bt\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.472130 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d58a04f0-dcce-4a15-9248-06fe40d8fceb-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.539393 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-549d689cb8-wpm9x" Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.686398 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8"] Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.688655 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hwxt8"] Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.700879 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wrbrz"] Nov 24 21:51:15 crc kubenswrapper[4767]: I1124 21:51:15.704960 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wrbrz"] Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.301444 4767 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"] Nov 24 21:51:16 crc kubenswrapper[4767]: E1124 21:51:16.302744 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" containerName="controller-manager" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.302872 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" containerName="controller-manager" Nov 24 21:51:16 crc kubenswrapper[4767]: E1124 21:51:16.302959 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d58a04f0-dcce-4a15-9248-06fe40d8fceb" containerName="route-controller-manager" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.303023 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d58a04f0-dcce-4a15-9248-06fe40d8fceb" containerName="route-controller-manager" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.303193 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="d58a04f0-dcce-4a15-9248-06fe40d8fceb" containerName="route-controller-manager" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.303321 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" containerName="controller-manager" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.303979 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.313747 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.321353 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d58a04f0-dcce-4a15-9248-06fe40d8fceb" path="/var/lib/kubelet/pods/d58a04f0-dcce-4a15-9248-06fe40d8fceb/volumes" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.322068 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f7d9e2-58aa-4606-bca2-0e02f7a7f759" path="/var/lib/kubelet/pods/e7f7d9e2-58aa-4606-bca2-0e02f7a7f759/volumes" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.322872 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.323128 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.323368 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.323529 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.323841 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"] Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.324013 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.343594 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 
21:51:16.374345 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"] Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.391993 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed81e59-1037-4144-9cb8-5c808b90a2f2-config\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.392259 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfcg7\" (UniqueName: \"kubernetes.io/projected/9ed81e59-1037-4144-9cb8-5c808b90a2f2-kube-api-access-vfcg7\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.392374 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9ed81e59-1037-4144-9cb8-5c808b90a2f2-proxy-ca-bundles\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.392466 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ed81e59-1037-4144-9cb8-5c808b90a2f2-client-ca\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.392552 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ed81e59-1037-4144-9cb8-5c808b90a2f2-serving-cert\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7" Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.392008 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.397322 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.397664 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.397890 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.398364 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.401952 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.402333 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.404669 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"]
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.493137 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/731345dc-150e-4971-80f7-7192c29d5c53-serving-cert\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.493186 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/731345dc-150e-4971-80f7-7192c29d5c53-config\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.493212 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/731345dc-150e-4971-80f7-7192c29d5c53-client-ca\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.493238 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed81e59-1037-4144-9cb8-5c808b90a2f2-config\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.493267 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfcg7\" (UniqueName: \"kubernetes.io/projected/9ed81e59-1037-4144-9cb8-5c808b90a2f2-kube-api-access-vfcg7\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.493301 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7bnj\" (UniqueName: \"kubernetes.io/projected/731345dc-150e-4971-80f7-7192c29d5c53-kube-api-access-q7bnj\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.493318 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9ed81e59-1037-4144-9cb8-5c808b90a2f2-proxy-ca-bundles\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.494656 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9ed81e59-1037-4144-9cb8-5c808b90a2f2-proxy-ca-bundles\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.494704 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed81e59-1037-4144-9cb8-5c808b90a2f2-config\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.494745 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ed81e59-1037-4144-9cb8-5c808b90a2f2-client-ca\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.494789 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ed81e59-1037-4144-9cb8-5c808b90a2f2-serving-cert\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.496062 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ed81e59-1037-4144-9cb8-5c808b90a2f2-client-ca\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.513005 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ed81e59-1037-4144-9cb8-5c808b90a2f2-serving-cert\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.519761 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfcg7\" (UniqueName: \"kubernetes.io/projected/9ed81e59-1037-4144-9cb8-5c808b90a2f2-kube-api-access-vfcg7\") pod \"controller-manager-65998cfc6f-t6qs7\" (UID: \"9ed81e59-1037-4144-9cb8-5c808b90a2f2\") " pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.595702 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/731345dc-150e-4971-80f7-7192c29d5c53-serving-cert\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.595987 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/731345dc-150e-4971-80f7-7192c29d5c53-config\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.596017 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/731345dc-150e-4971-80f7-7192c29d5c53-client-ca\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.596055 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7bnj\" (UniqueName: \"kubernetes.io/projected/731345dc-150e-4971-80f7-7192c29d5c53-kube-api-access-q7bnj\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.596859 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/731345dc-150e-4971-80f7-7192c29d5c53-client-ca\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.597079 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/731345dc-150e-4971-80f7-7192c29d5c53-config\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.607870 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/731345dc-150e-4971-80f7-7192c29d5c53-serving-cert\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.625189 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7bnj\" (UniqueName: \"kubernetes.io/projected/731345dc-150e-4971-80f7-7192c29d5c53-kube-api-access-q7bnj\") pod \"route-controller-manager-566c888fb6-clbcj\" (UID: \"731345dc-150e-4971-80f7-7192c29d5c53\") " pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.628573 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.722079 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:16 crc kubenswrapper[4767]: I1124 21:51:16.915494 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"]
Nov 24 21:51:16 crc kubenswrapper[4767]: W1124 21:51:16.931674 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ed81e59_1037_4144_9cb8_5c808b90a2f2.slice/crio-883de7ab4347a7977940dd20fe2c935931684c8cbdf64c9c20f950b3f3611de5 WatchSource:0}: Error finding container 883de7ab4347a7977940dd20fe2c935931684c8cbdf64c9c20f950b3f3611de5: Status 404 returned error can't find the container with id 883de7ab4347a7977940dd20fe2c935931684c8cbdf64c9c20f950b3f3611de5
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.214639 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"]
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.362834 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7" event={"ID":"9ed81e59-1037-4144-9cb8-5c808b90a2f2","Type":"ContainerStarted","Data":"166641f80ecb611436d10dca4d8e8976e207e83c2501075226e887133b0b494e"}
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.362878 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7" event={"ID":"9ed81e59-1037-4144-9cb8-5c808b90a2f2","Type":"ContainerStarted","Data":"883de7ab4347a7977940dd20fe2c935931684c8cbdf64c9c20f950b3f3611de5"}
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.363434 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.365930 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj" event={"ID":"731345dc-150e-4971-80f7-7192c29d5c53","Type":"ContainerStarted","Data":"0728d411f1ee87e2bfdf5ce81827bd01ee35b902b71395ec9f0512aff7ac5224"}
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.365957 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj" event={"ID":"731345dc-150e-4971-80f7-7192c29d5c53","Type":"ContainerStarted","Data":"471d509a91dc0e4645d1f358cace2c081abc3ca3193873accef68da029f76448"}
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.366219 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.367154 4767 patch_prober.go:28] interesting pod/route-controller-manager-566c888fb6-clbcj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body=
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.367190 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj" podUID="731345dc-150e-4971-80f7-7192c29d5c53" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused"
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.369015 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7"
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.380886 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65998cfc6f-t6qs7" podStartSLOduration=1.380871987 podStartE2EDuration="1.380871987s" podCreationTimestamp="2025-11-24 21:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:51:17.376907114 +0000 UTC m=+760.293890486" watchObservedRunningTime="2025-11-24 21:51:17.380871987 +0000 UTC m=+760.297855359"
Nov 24 21:51:17 crc kubenswrapper[4767]: I1124 21:51:17.413120 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj" podStartSLOduration=1.41309864 podStartE2EDuration="1.41309864s" podCreationTimestamp="2025-11-24 21:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:51:17.40893615 +0000 UTC m=+760.325919532" watchObservedRunningTime="2025-11-24 21:51:17.41309864 +0000 UTC m=+760.330082022"
Nov 24 21:51:18 crc kubenswrapper[4767]: I1124 21:51:18.377919 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-566c888fb6-clbcj"
Nov 24 21:51:22 crc kubenswrapper[4767]: I1124 21:51:22.270631 4767 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.436140 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rlfjx"]
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.441593 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.450975 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rlfjx"]
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.630990 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-utilities\") pod \"redhat-operators-rlfjx\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") " pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.631063 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hl8b\" (UniqueName: \"kubernetes.io/projected/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-kube-api-access-8hl8b\") pod \"redhat-operators-rlfjx\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") " pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.631196 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-catalog-content\") pod \"redhat-operators-rlfjx\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") " pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.732907 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hl8b\" (UniqueName: \"kubernetes.io/projected/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-kube-api-access-8hl8b\") pod \"redhat-operators-rlfjx\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") " pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.732977 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-catalog-content\") pod \"redhat-operators-rlfjx\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") " pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.733041 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-utilities\") pod \"redhat-operators-rlfjx\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") " pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.733588 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-catalog-content\") pod \"redhat-operators-rlfjx\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") " pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.733626 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-utilities\") pod \"redhat-operators-rlfjx\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") " pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.752018 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hl8b\" (UniqueName: \"kubernetes.io/projected/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-kube-api-access-8hl8b\") pod \"redhat-operators-rlfjx\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") " pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:27 crc kubenswrapper[4767]: I1124 21:51:27.772820 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:28 crc kubenswrapper[4767]: I1124 21:51:28.206869 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rlfjx"]
Nov 24 21:51:28 crc kubenswrapper[4767]: I1124 21:51:28.439648 4767 generic.go:334] "Generic (PLEG): container finished" podID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" containerID="c6a10563356ed4d917b1ccbdb7d07c92cfdbb65871ad74d17092be6a6b19bd37" exitCode=0
Nov 24 21:51:28 crc kubenswrapper[4767]: I1124 21:51:28.439821 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlfjx" event={"ID":"5df6b8c1-e11e-4279-b0ea-5ba155d6950b","Type":"ContainerDied","Data":"c6a10563356ed4d917b1ccbdb7d07c92cfdbb65871ad74d17092be6a6b19bd37"}
Nov 24 21:51:28 crc kubenswrapper[4767]: I1124 21:51:28.440095 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlfjx" event={"ID":"5df6b8c1-e11e-4279-b0ea-5ba155d6950b","Type":"ContainerStarted","Data":"78dadfabd8ec068a3c12f44624be42cf0320290caa4703a7fdca53d836259da8"}
Nov 24 21:51:29 crc kubenswrapper[4767]: I1124 21:51:29.450815 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlfjx" event={"ID":"5df6b8c1-e11e-4279-b0ea-5ba155d6950b","Type":"ContainerStarted","Data":"ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24"}
Nov 24 21:51:30 crc kubenswrapper[4767]: I1124 21:51:30.460717 4767 generic.go:334] "Generic (PLEG): container finished" podID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" containerID="ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24" exitCode=0
Nov 24 21:51:30 crc kubenswrapper[4767]: I1124 21:51:30.460751 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlfjx" event={"ID":"5df6b8c1-e11e-4279-b0ea-5ba155d6950b","Type":"ContainerDied","Data":"ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24"}
Nov 24 21:51:31 crc kubenswrapper[4767]: I1124 21:51:31.474324 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlfjx" event={"ID":"5df6b8c1-e11e-4279-b0ea-5ba155d6950b","Type":"ContainerStarted","Data":"70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194"}
Nov 24 21:51:31 crc kubenswrapper[4767]: I1124 21:51:31.497442 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rlfjx" podStartSLOduration=2.101001614 podStartE2EDuration="4.49742415s" podCreationTimestamp="2025-11-24 21:51:27 +0000 UTC" firstStartedPulling="2025-11-24 21:51:28.441384636 +0000 UTC m=+771.358368018" lastFinishedPulling="2025-11-24 21:51:30.837807182 +0000 UTC m=+773.754790554" observedRunningTime="2025-11-24 21:51:31.49392358 +0000 UTC m=+774.410907052" watchObservedRunningTime="2025-11-24 21:51:31.49742415 +0000 UTC m=+774.414407532"
Nov 24 21:51:35 crc kubenswrapper[4767]: I1124 21:51:35.221873 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7754dcd9b8-4f27l"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.101409 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9vtpb"]
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.103658 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.110504 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"]
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.111552 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.116337 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2fchv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.116657 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.117082 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.123981 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.129166 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"]
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.197889 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-rtxm7"]
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.199340 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.201371 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.201524 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.202062 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-87jpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.204175 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.220178 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-5z6rv"]
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.221089 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.226021 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.232797 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-5z6rv"]
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.273303 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a335b09c-eb27-4f81-92fd-c8e8cf54bc29-cert\") pod \"frr-k8s-webhook-server-6998585d5-7dw7f\" (UID: \"a335b09c-eb27-4f81-92fd-c8e8cf54bc29\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.273357 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-reloader\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.273434 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-metrics\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.273455 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxpkb\" (UniqueName: \"kubernetes.io/projected/027827e3-0a39-466b-9b89-304593d0c558-kube-api-access-sxpkb\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.273474 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-frr-sockets\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.273510 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/027827e3-0a39-466b-9b89-304593d0c558-frr-startup\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.273530 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/027827e3-0a39-466b-9b89-304593d0c558-metrics-certs\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.273548 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hln86\" (UniqueName: \"kubernetes.io/projected/a335b09c-eb27-4f81-92fd-c8e8cf54bc29-kube-api-access-hln86\") pod \"frr-k8s-webhook-server-6998585d5-7dw7f\" (UID: \"a335b09c-eb27-4f81-92fd-c8e8cf54bc29\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.273584 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-frr-conf\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.374686 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/027827e3-0a39-466b-9b89-304593d0c558-frr-startup\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375020 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76wp2\" (UniqueName: \"kubernetes.io/projected/fe9b8380-26eb-4029-aff7-25244660b6be-kube-api-access-76wp2\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375113 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/027827e3-0a39-466b-9b89-304593d0c558-metrics-certs\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375199 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hln86\" (UniqueName: \"kubernetes.io/projected/a335b09c-eb27-4f81-92fd-c8e8cf54bc29-kube-api-access-hln86\") pod \"frr-k8s-webhook-server-6998585d5-7dw7f\" (UID: \"a335b09c-eb27-4f81-92fd-c8e8cf54bc29\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375300 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-frr-conf\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: E1124 21:51:36.375229 4767 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375413 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a335b09c-eb27-4f81-92fd-c8e8cf54bc29-cert\") pod \"frr-k8s-webhook-server-6998585d5-7dw7f\" (UID: \"a335b09c-eb27-4f81-92fd-c8e8cf54bc29\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"
Nov 24 21:51:36 crc kubenswrapper[4767]: E1124 21:51:36.375492 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/027827e3-0a39-466b-9b89-304593d0c558-metrics-certs podName:027827e3-0a39-466b-9b89-304593d0c558 nodeName:}" failed. No retries permitted until 2025-11-24 21:51:36.875467807 +0000 UTC m=+779.792451169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/027827e3-0a39-466b-9b89-304593d0c558-metrics-certs") pod "frr-k8s-9vtpb" (UID: "027827e3-0a39-466b-9b89-304593d0c558") : secret "frr-k8s-certs-secret" not found
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375538 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-reloader\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375565 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fe9b8380-26eb-4029-aff7-25244660b6be-metallb-excludel2\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375615 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-memberlist\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375643 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-metrics-certs\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375671 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42c8f455-18a7-42b3-ace1-f84396927f3f-cert\") pod \"controller-6c7b4b5f48-5z6rv\" (UID: \"42c8f455-18a7-42b3-ace1-f84396927f3f\") " pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375695 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42c8f455-18a7-42b3-ace1-f84396927f3f-metrics-certs\") pod \"controller-6c7b4b5f48-5z6rv\" (UID: \"42c8f455-18a7-42b3-ace1-f84396927f3f\") " pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375721 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-metrics\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375719 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/027827e3-0a39-466b-9b89-304593d0c558-frr-startup\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375785 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-frr-conf\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375889 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxpkb\" (UniqueName: \"kubernetes.io/projected/027827e3-0a39-466b-9b89-304593d0c558-kube-api-access-sxpkb\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375908 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-reloader\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375933 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-frr-sockets\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.375979 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57hn4\" (UniqueName: \"kubernetes.io/projected/42c8f455-18a7-42b3-ace1-f84396927f3f-kube-api-access-57hn4\") pod \"controller-6c7b4b5f48-5z6rv\" (UID: \"42c8f455-18a7-42b3-ace1-f84396927f3f\") " pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.376057 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-metrics\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.376251 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/027827e3-0a39-466b-9b89-304593d0c558-frr-sockets\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.381732 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a335b09c-eb27-4f81-92fd-c8e8cf54bc29-cert\") pod \"frr-k8s-webhook-server-6998585d5-7dw7f\" (UID: \"a335b09c-eb27-4f81-92fd-c8e8cf54bc29\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.390159 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hln86\" (UniqueName: \"kubernetes.io/projected/a335b09c-eb27-4f81-92fd-c8e8cf54bc29-kube-api-access-hln86\") pod \"frr-k8s-webhook-server-6998585d5-7dw7f\" (UID: \"a335b09c-eb27-4f81-92fd-c8e8cf54bc29\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.400698 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxpkb\" (UniqueName: \"kubernetes.io/projected/027827e3-0a39-466b-9b89-304593d0c558-kube-api-access-sxpkb\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.431716 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.477027 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76wp2\" (UniqueName: \"kubernetes.io/projected/fe9b8380-26eb-4029-aff7-25244660b6be-kube-api-access-76wp2\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.477106 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fe9b8380-26eb-4029-aff7-25244660b6be-metallb-excludel2\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.477133 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-memberlist\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.477151 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-metrics-certs\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.477170 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42c8f455-18a7-42b3-ace1-f84396927f3f-cert\") pod \"controller-6c7b4b5f48-5z6rv\" (UID: \"42c8f455-18a7-42b3-ace1-f84396927f3f\") " pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.477187 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42c8f455-18a7-42b3-ace1-f84396927f3f-metrics-certs\") pod \"controller-6c7b4b5f48-5z6rv\" (UID: \"42c8f455-18a7-42b3-ace1-f84396927f3f\") " pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.477217 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57hn4\" (UniqueName: \"kubernetes.io/projected/42c8f455-18a7-42b3-ace1-f84396927f3f-kube-api-access-57hn4\") pod \"controller-6c7b4b5f48-5z6rv\" (UID: \"42c8f455-18a7-42b3-ace1-f84396927f3f\") " pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.478289 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fe9b8380-26eb-4029-aff7-25244660b6be-metallb-excludel2\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: E1124 21:51:36.478368 4767 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 24 21:51:36 crc kubenswrapper[4767]: E1124 21:51:36.478407 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-memberlist podName:fe9b8380-26eb-4029-aff7-25244660b6be nodeName:}" failed. No retries permitted until 2025-11-24 21:51:36.978393664 +0000 UTC m=+779.895377036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-memberlist") pod "speaker-rtxm7" (UID: "fe9b8380-26eb-4029-aff7-25244660b6be") : secret "metallb-memberlist" not found
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.481707 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42c8f455-18a7-42b3-ace1-f84396927f3f-metrics-certs\") pod \"controller-6c7b4b5f48-5z6rv\" (UID: \"42c8f455-18a7-42b3-ace1-f84396927f3f\") " pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.481873 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-metrics-certs\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.488704 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42c8f455-18a7-42b3-ace1-f84396927f3f-cert\") pod \"controller-6c7b4b5f48-5z6rv\" (UID: \"42c8f455-18a7-42b3-ace1-f84396927f3f\") " pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.495859 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57hn4\" (UniqueName: \"kubernetes.io/projected/42c8f455-18a7-42b3-ace1-f84396927f3f-kube-api-access-57hn4\") pod \"controller-6c7b4b5f48-5z6rv\" (UID: \"42c8f455-18a7-42b3-ace1-f84396927f3f\") " pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.501488 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76wp2\" (UniqueName: \"kubernetes.io/projected/fe9b8380-26eb-4029-aff7-25244660b6be-kube-api-access-76wp2\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.537332 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.876380 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"]
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.882083 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/027827e3-0a39-466b-9b89-304593d0c558-metrics-certs\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: W1124 21:51:36.883627 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda335b09c_eb27_4f81_92fd_c8e8cf54bc29.slice/crio-cde850e93d3856bac1a712daf5bb59229671711a28a99942b40d7107e0e00877 WatchSource:0}: Error finding container cde850e93d3856bac1a712daf5bb59229671711a28a99942b40d7107e0e00877: Status 404 returned error can't find the container with id cde850e93d3856bac1a712daf5bb59229671711a28a99942b40d7107e0e00877
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.889414 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/027827e3-0a39-466b-9b89-304593d0c558-metrics-certs\") pod \"frr-k8s-9vtpb\" (UID: \"027827e3-0a39-466b-9b89-304593d0c558\") " pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.983476 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-memberlist\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:36 crc kubenswrapper[4767]: E1124 21:51:36.983665 4767 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 24 21:51:36 crc kubenswrapper[4767]: E1124 21:51:36.983736 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-memberlist podName:fe9b8380-26eb-4029-aff7-25244660b6be nodeName:}" failed. No retries permitted until 2025-11-24 21:51:37.983719187 +0000 UTC m=+780.900702559 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-memberlist") pod "speaker-rtxm7" (UID: "fe9b8380-26eb-4029-aff7-25244660b6be") : secret "metallb-memberlist" not found
Nov 24 21:51:36 crc kubenswrapper[4767]: I1124 21:51:36.993783 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-5z6rv"]
Nov 24 21:51:36 crc kubenswrapper[4767]: W1124 21:51:36.999761 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c8f455_18a7_42b3_ace1_f84396927f3f.slice/crio-5b4a575c042484cb76a56dbd5430853b3f9a2247eaa78fd5101c3235c5c0db0b WatchSource:0}: Error finding container 5b4a575c042484cb76a56dbd5430853b3f9a2247eaa78fd5101c3235c5c0db0b: Status 404 returned error can't find the container with id 5b4a575c042484cb76a56dbd5430853b3f9a2247eaa78fd5101c3235c5c0db0b
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.030730 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9vtpb"
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.511176 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9vtpb" event={"ID":"027827e3-0a39-466b-9b89-304593d0c558","Type":"ContainerStarted","Data":"080e6b3fbb0ba12e4f6a05c7c018aa036fa83fc5d3b08ee14732e7dc434122f4"}
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.512422 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f" event={"ID":"a335b09c-eb27-4f81-92fd-c8e8cf54bc29","Type":"ContainerStarted","Data":"cde850e93d3856bac1a712daf5bb59229671711a28a99942b40d7107e0e00877"}
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.514804 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5z6rv" event={"ID":"42c8f455-18a7-42b3-ace1-f84396927f3f","Type":"ContainerStarted","Data":"650fb63211556c8a75f7f4350e028ec53b322ec031f4a2358224d9fc8de1c6bf"}
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.514829 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5z6rv" event={"ID":"42c8f455-18a7-42b3-ace1-f84396927f3f","Type":"ContainerStarted","Data":"f71d5237e08a8177854dca5208ed0418151a5c18f79e44be7c1d35d82deb2373"}
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.514839 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5z6rv" event={"ID":"42c8f455-18a7-42b3-ace1-f84396927f3f","Type":"ContainerStarted","Data":"5b4a575c042484cb76a56dbd5430853b3f9a2247eaa78fd5101c3235c5c0db0b"}
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.514959 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-5z6rv"
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.536804 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-5z6rv" podStartSLOduration=1.5367806320000001 podStartE2EDuration="1.536780632s" podCreationTimestamp="2025-11-24 21:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:51:37.534150437 +0000 UTC m=+780.451133809" watchObservedRunningTime="2025-11-24 21:51:37.536780632 +0000 UTC m=+780.453764004"
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.773631 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.773953 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.866970 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:37 crc kubenswrapper[4767]: I1124 21:51:37.999430 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-memberlist\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:38 crc kubenswrapper[4767]: I1124 21:51:38.003780 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fe9b8380-26eb-4029-aff7-25244660b6be-memberlist\") pod \"speaker-rtxm7\" (UID: \"fe9b8380-26eb-4029-aff7-25244660b6be\") " pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:38 crc kubenswrapper[4767]: I1124 21:51:38.012542 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:38 crc kubenswrapper[4767]: I1124 21:51:38.529143 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rtxm7" event={"ID":"fe9b8380-26eb-4029-aff7-25244660b6be","Type":"ContainerStarted","Data":"c978d2fe6ca6f1f52f0a58bf5cea99b62ed7f593ef8dac7958a1f9e55cbc5d23"}
Nov 24 21:51:38 crc kubenswrapper[4767]: I1124 21:51:38.529501 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rtxm7" event={"ID":"fe9b8380-26eb-4029-aff7-25244660b6be","Type":"ContainerStarted","Data":"c803cd4408232dd89c58af4201eeb6bab83a4514f2eb3896b9c0a5812f834de4"}
Nov 24 21:51:38 crc kubenswrapper[4767]: I1124 21:51:38.579831 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:38 crc kubenswrapper[4767]: I1124 21:51:38.658049 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rlfjx"]
Nov 24 21:51:39 crc kubenswrapper[4767]: I1124 21:51:39.536874 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rtxm7" event={"ID":"fe9b8380-26eb-4029-aff7-25244660b6be","Type":"ContainerStarted","Data":"1f5160875fb77471da6523ab69400b124472dae947751183702f4a0defdd302e"}
Nov 24 21:51:39 crc kubenswrapper[4767]: I1124 21:51:39.553693 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-rtxm7" podStartSLOduration=3.553677984 podStartE2EDuration="3.553677984s" podCreationTimestamp="2025-11-24 21:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:51:39.55070297 +0000 UTC m=+782.467686342" watchObservedRunningTime="2025-11-24 21:51:39.553677984 +0000 UTC m=+782.470661356"
Nov 24 21:51:40 crc kubenswrapper[4767]: I1124 21:51:40.543079 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rlfjx" podUID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" containerName="registry-server" containerID="cri-o://70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194" gracePeriod=2
Nov 24 21:51:40 crc kubenswrapper[4767]: I1124 21:51:40.543850 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-rtxm7"
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.051188 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.058445 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hl8b\" (UniqueName: \"kubernetes.io/projected/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-kube-api-access-8hl8b\") pod \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") "
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.058543 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-catalog-content\") pod \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") "
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.058600 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-utilities\") pod \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\" (UID: \"5df6b8c1-e11e-4279-b0ea-5ba155d6950b\") "
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.059650 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-utilities" (OuterVolumeSpecName: "utilities") pod "5df6b8c1-e11e-4279-b0ea-5ba155d6950b" (UID: "5df6b8c1-e11e-4279-b0ea-5ba155d6950b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.067368 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-kube-api-access-8hl8b" (OuterVolumeSpecName: "kube-api-access-8hl8b") pod "5df6b8c1-e11e-4279-b0ea-5ba155d6950b" (UID: "5df6b8c1-e11e-4279-b0ea-5ba155d6950b"). InnerVolumeSpecName "kube-api-access-8hl8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.147249 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5df6b8c1-e11e-4279-b0ea-5ba155d6950b" (UID: "5df6b8c1-e11e-4279-b0ea-5ba155d6950b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.159835 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hl8b\" (UniqueName: \"kubernetes.io/projected/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-kube-api-access-8hl8b\") on node \"crc\" DevicePath \"\""
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.159868 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.159879 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df6b8c1-e11e-4279-b0ea-5ba155d6950b-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.550156 4767 generic.go:334] "Generic (PLEG): container finished" podID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" containerID="70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194" exitCode=0
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.550238 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rlfjx"
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.550299 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlfjx" event={"ID":"5df6b8c1-e11e-4279-b0ea-5ba155d6950b","Type":"ContainerDied","Data":"70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194"}
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.550333 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlfjx" event={"ID":"5df6b8c1-e11e-4279-b0ea-5ba155d6950b","Type":"ContainerDied","Data":"78dadfabd8ec068a3c12f44624be42cf0320290caa4703a7fdca53d836259da8"}
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.550351 4767 scope.go:117] "RemoveContainer" containerID="70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194"
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.579210 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rlfjx"]
Nov 24 21:51:41 crc kubenswrapper[4767]: I1124 21:51:41.582805 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rlfjx"]
Nov 24 21:51:42 crc kubenswrapper[4767]: I1124 21:51:42.323999 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" path="/var/lib/kubelet/pods/5df6b8c1-e11e-4279-b0ea-5ba155d6950b/volumes"
Nov 24 21:51:43 crc kubenswrapper[4767]: I1124 21:51:43.509630 4767 scope.go:117] "RemoveContainer" containerID="ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24"
Nov 24 21:51:43 crc kubenswrapper[4767]: I1124 21:51:43.577187 4767 scope.go:117] "RemoveContainer" containerID="c6a10563356ed4d917b1ccbdb7d07c92cfdbb65871ad74d17092be6a6b19bd37"
Nov 24 21:51:43 crc kubenswrapper[4767]: I1124 21:51:43.606682 4767 scope.go:117] "RemoveContainer" containerID="70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194"
Nov 24 21:51:43 crc kubenswrapper[4767]: E1124 21:51:43.607281 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194\": container with ID starting with 70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194 not found: ID does not exist" containerID="70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194"
Nov 24 21:51:43 crc kubenswrapper[4767]: I1124 21:51:43.607318 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194"} err="failed to get container status \"70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194\": rpc error: code = NotFound desc = could not find container \"70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194\": container with ID starting with 70eff4fcb869a6e5c82c3047959c64e5e0db4659c2df6f5d7b26338b36960194 not found: ID does not exist"
Nov 24 21:51:43 crc kubenswrapper[4767]: I1124 21:51:43.607340 4767 scope.go:117] "RemoveContainer" containerID="ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24"
Nov 24 21:51:43 crc kubenswrapper[4767]: E1124 21:51:43.607787 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24\": container with ID starting with ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24 not found: ID does not exist" containerID="ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24"
Nov 24 21:51:43 crc kubenswrapper[4767]: I1124 21:51:43.607821 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24"} err="failed to get container status \"ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24\": rpc error: code = NotFound desc = could not find container \"ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24\": container with ID starting with ca1a4f49631b944c79dc2dc84706860239981d26c3a4a4bc506b2c440b9c2d24 not found: ID does not exist"
Nov 24 21:51:43 crc kubenswrapper[4767]: I1124 21:51:43.607840 4767 scope.go:117] "RemoveContainer" containerID="c6a10563356ed4d917b1ccbdb7d07c92cfdbb65871ad74d17092be6a6b19bd37"
Nov 24 21:51:43 crc kubenswrapper[4767]: E1124 21:51:43.608208 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6a10563356ed4d917b1ccbdb7d07c92cfdbb65871ad74d17092be6a6b19bd37\": container with ID starting with c6a10563356ed4d917b1ccbdb7d07c92cfdbb65871ad74d17092be6a6b19bd37 not found: ID does not exist" containerID="c6a10563356ed4d917b1ccbdb7d07c92cfdbb65871ad74d17092be6a6b19bd37"
Nov 24 21:51:43 crc kubenswrapper[4767]: I1124 21:51:43.608332 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6a10563356ed4d917b1ccbdb7d07c92cfdbb65871ad74d17092be6a6b19bd37"} err="failed to get container status \"c6a10563356ed4d917b1ccbdb7d07c92cfdbb65871ad74d17092be6a6b19bd37\": rpc error: code = NotFound desc = could not find container \"c6a10563356ed4d917b1ccbdb7d07c92cfdbb65871ad74d17092be6a6b19bd37\": container with ID starting with c6a10563356ed4d917b1ccbdb7d07c92cfdbb65871ad74d17092be6a6b19bd37 not found: ID does not exist"
Nov 24 21:51:44 crc kubenswrapper[4767]: I1124 21:51:44.597425 4767 generic.go:334] "Generic (PLEG): container finished" podID="027827e3-0a39-466b-9b89-304593d0c558" containerID="7b6efa731d3868a30927e8bec50669d8014605503c238c19bd214e35950f7e3d" exitCode=0
Nov 24 21:51:44 crc kubenswrapper[4767]: I1124 21:51:44.597512 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9vtpb" event={"ID":"027827e3-0a39-466b-9b89-304593d0c558","Type":"ContainerDied","Data":"7b6efa731d3868a30927e8bec50669d8014605503c238c19bd214e35950f7e3d"}
Nov 24 21:51:44 crc kubenswrapper[4767]: I1124 21:51:44.600819 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f" event={"ID":"a335b09c-eb27-4f81-92fd-c8e8cf54bc29","Type":"ContainerStarted","Data":"aa06e6e1f1d979ace5abed641951ac20741cc091eaa4f79bed7d9a964c32f68e"}
Nov 24 21:51:44 crc kubenswrapper[4767]: I1124 21:51:44.601001 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f"
Nov 24 21:51:44 crc kubenswrapper[4767]: I1124 21:51:44.656183 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f" podStartSLOduration=1.896804516 podStartE2EDuration="8.656154329s" podCreationTimestamp="2025-11-24 21:51:36 +0000 UTC" firstStartedPulling="2025-11-24 21:51:36.885210415 +0000 UTC m=+779.802193787" lastFinishedPulling="2025-11-24 21:51:43.644560228 +0000 UTC m=+786.561543600" observedRunningTime="2025-11-24 21:51:44.649404208 +0000 UTC m=+787.566387670" watchObservedRunningTime="2025-11-24 21:51:44.656154329 +0000 UTC m=+787.573137741"
Nov 24 21:51:45 crc kubenswrapper[4767]: I1124 21:51:45.612795 4767 generic.go:334] "Generic (PLEG): container finished" podID="027827e3-0a39-466b-9b89-304593d0c558" containerID="3d440a12bddfc78c52cb7cf8b78e62080184b9150a24bf2921793ba2af5bc9fd" exitCode=0
Nov 24 21:51:45 crc kubenswrapper[4767]: I1124 21:51:45.612934 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9vtpb" event={"ID":"027827e3-0a39-466b-9b89-304593d0c558","Type":"ContainerDied","Data":"3d440a12bddfc78c52cb7cf8b78e62080184b9150a24bf2921793ba2af5bc9fd"}
Nov 24 21:51:46 crc kubenswrapper[4767]: I1124 21:51:46.622061 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9vtpb" event={"ID":"027827e3-0a39-466b-9b89-304593d0c558","Type":"ContainerDied","Data":"a1af6b39e323b3c04d51f34d0964b08c39ae13881ed98d704dbb9724a6e54ffd"}
Nov 24 21:51:46 crc kubenswrapper[4767]: I1124 21:51:46.621928 4767 generic.go:334] "Generic (PLEG): container finished" podID="027827e3-0a39-466b-9b89-304593d0c558" containerID="a1af6b39e323b3c04d51f34d0964b08c39ae13881ed98d704dbb9724a6e54ffd" exitCode=0
Nov 24 21:51:47 crc kubenswrapper[4767]: I1124 21:51:47.634882 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9vtpb" event={"ID":"027827e3-0a39-466b-9b89-304593d0c558","Type":"ContainerStarted","Data":"5aeb2e08ac6c294f45ce16961eb3e8806adfc52af8fc5666e807d5adb3395d29"}
Nov 24 21:51:47 crc kubenswrapper[4767]: I1124 21:51:47.635226 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9vtpb" event={"ID":"027827e3-0a39-466b-9b89-304593d0c558","Type":"ContainerStarted","Data":"dc995c99195dcb87294410d030d9a02c040b869653810d845b78e711fa51d555"}
Nov 24 21:51:47 crc kubenswrapper[4767]: I1124 21:51:47.635240 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9vtpb" event={"ID":"027827e3-0a39-466b-9b89-304593d0c558","Type":"ContainerStarted","Data":"a109da2aaa3d2da4a23ff39ee1740040b89407ce5739308772b1db30ce262b18"}
Nov 24 21:51:47 crc kubenswrapper[4767]: I1124 21:51:47.635251 4767 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/frr-k8s-9vtpb" event={"ID":"027827e3-0a39-466b-9b89-304593d0c558","Type":"ContainerStarted","Data":"c045415825177a49d3d87b4190711844b8602834d692867a969e64b12069cc43"} Nov 24 21:51:47 crc kubenswrapper[4767]: I1124 21:51:47.635262 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9vtpb" event={"ID":"027827e3-0a39-466b-9b89-304593d0c558","Type":"ContainerStarted","Data":"a68e30624a81ea2d5e7a13c8b33cd544555e35353eb26a1c3f1f23621c655fdc"} Nov 24 21:51:48 crc kubenswrapper[4767]: I1124 21:51:48.016400 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-rtxm7" Nov 24 21:51:48 crc kubenswrapper[4767]: I1124 21:51:48.648217 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9vtpb" event={"ID":"027827e3-0a39-466b-9b89-304593d0c558","Type":"ContainerStarted","Data":"3b8af2940788c4f75614ff69f7ad1299387964a332530a8754ba89fab66404d4"} Nov 24 21:51:48 crc kubenswrapper[4767]: I1124 21:51:48.648704 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9vtpb" Nov 24 21:51:48 crc kubenswrapper[4767]: I1124 21:51:48.682008 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9vtpb" podStartSLOduration=6.337818684 podStartE2EDuration="12.681981489s" podCreationTimestamp="2025-11-24 21:51:36 +0000 UTC" firstStartedPulling="2025-11-24 21:51:37.273954833 +0000 UTC m=+780.190938235" lastFinishedPulling="2025-11-24 21:51:43.618117648 +0000 UTC m=+786.535101040" observedRunningTime="2025-11-24 21:51:48.681334711 +0000 UTC m=+791.598318143" watchObservedRunningTime="2025-11-24 21:51:48.681981489 +0000 UTC m=+791.598964891" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.298428 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sfphm"] Nov 24 21:51:51 crc kubenswrapper[4767]: E1124 21:51:51.299674 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" containerName="extract-utilities" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.299774 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" containerName="extract-utilities" Nov 24 21:51:51 crc kubenswrapper[4767]: E1124 21:51:51.299863 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" containerName="extract-content" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.299928 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" containerName="extract-content" Nov 24 21:51:51 crc kubenswrapper[4767]: E1124 21:51:51.300014 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" containerName="registry-server" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.300078 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" containerName="registry-server" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.300341 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df6b8c1-e11e-4279-b0ea-5ba155d6950b" containerName="registry-server" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.300926 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sfphm" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.304306 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-mpsf9" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.304703 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.306618 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.309593 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p29lc\" (UniqueName: \"kubernetes.io/projected/069a00af-68eb-41b7-9bcf-5209562d25d8-kube-api-access-p29lc\") pod \"openstack-operator-index-sfphm\" (UID: \"069a00af-68eb-41b7-9bcf-5209562d25d8\") " pod="openstack-operators/openstack-operator-index-sfphm" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.336380 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sfphm"] Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.411418 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p29lc\" (UniqueName: \"kubernetes.io/projected/069a00af-68eb-41b7-9bcf-5209562d25d8-kube-api-access-p29lc\") pod \"openstack-operator-index-sfphm\" (UID: \"069a00af-68eb-41b7-9bcf-5209562d25d8\") " pod="openstack-operators/openstack-operator-index-sfphm" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.440166 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p29lc\" (UniqueName: \"kubernetes.io/projected/069a00af-68eb-41b7-9bcf-5209562d25d8-kube-api-access-p29lc\") pod \"openstack-operator-index-sfphm\" (UID: \"069a00af-68eb-41b7-9bcf-5209562d25d8\") " pod="openstack-operators/openstack-operator-index-sfphm" Nov 24 21:51:51 crc kubenswrapper[4767]: I1124 21:51:51.620083 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sfphm" Nov 24 21:51:52 crc kubenswrapper[4767]: I1124 21:51:52.032111 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9vtpb" Nov 24 21:51:52 crc kubenswrapper[4767]: I1124 21:51:52.077162 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9vtpb" Nov 24 21:51:52 crc kubenswrapper[4767]: I1124 21:51:52.145060 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sfphm"] Nov 24 21:51:52 crc kubenswrapper[4767]: W1124 21:51:52.153211 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod069a00af_68eb_41b7_9bcf_5209562d25d8.slice/crio-ac6246cf844d4b93159ae922e7d226b600233f8796e345309f4559e7da491685 WatchSource:0}: Error finding container ac6246cf844d4b93159ae922e7d226b600233f8796e345309f4559e7da491685: Status 404 returned error can't find the container with id ac6246cf844d4b93159ae922e7d226b600233f8796e345309f4559e7da491685 Nov 24 21:51:52 crc kubenswrapper[4767]: I1124 21:51:52.689525 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sfphm" event={"ID":"069a00af-68eb-41b7-9bcf-5209562d25d8","Type":"ContainerStarted","Data":"ac6246cf844d4b93159ae922e7d226b600233f8796e345309f4559e7da491685"} Nov 24 21:51:54 crc kubenswrapper[4767]: I1124 21:51:54.712045 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sfphm" event={"ID":"069a00af-68eb-41b7-9bcf-5209562d25d8","Type":"ContainerStarted","Data":"8a850df77ae9d2bc9d7edfd0b288033db55b6599d91aaef30287436b7c7d4663"} Nov 24 21:51:54 crc kubenswrapper[4767]: I1124 21:51:54.741321 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sfphm" podStartSLOduration=1.501716519 podStartE2EDuration="3.741265292s" podCreationTimestamp="2025-11-24 21:51:51 +0000 UTC" firstStartedPulling="2025-11-24 21:51:52.154945841 +0000 UTC m=+795.071929203" lastFinishedPulling="2025-11-24 21:51:54.394494594 +0000 UTC m=+797.311477976" observedRunningTime="2025-11-24 21:51:54.73412213 +0000 UTC m=+797.651105542" watchObservedRunningTime="2025-11-24 21:51:54.741265292 +0000 UTC m=+797.658248684" Nov 24 21:51:56 crc kubenswrapper[4767]: I1124 21:51:56.436332 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7dw7f" Nov 24 21:51:56 crc kubenswrapper[4767]: I1124 21:51:56.544355 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-5z6rv" Nov 24 21:51:57 crc kubenswrapper[4767]: I1124 21:51:57.034399 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9vtpb" Nov 24 21:52:01 crc kubenswrapper[4767]: I1124 21:52:01.620631 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-sfphm" Nov 24 21:52:01 crc kubenswrapper[4767]: I1124 21:52:01.621293 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-sfphm" Nov 24 21:52:01 crc kubenswrapper[4767]: I1124 21:52:01.663141 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-sfphm" Nov 24 21:52:01 crc kubenswrapper[4767]: I1124 21:52:01.807877 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-sfphm" Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.746643 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn"] Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.748743 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.752973 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-sc2rz" Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.764748 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn"] Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.885834 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-bundle\") pod \"73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.886392 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-util\") pod \"73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.886489 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9jb6\" (UniqueName: \"kubernetes.io/projected/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-kube-api-access-v9jb6\") pod \"73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.988491 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-bundle\") pod \"73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.988621 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-util\") pod \"73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.988675 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-v9jb6\" (UniqueName: \"kubernetes.io/projected/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-kube-api-access-v9jb6\") pod \"73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.989344 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-util\") pod \"73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:07 crc kubenswrapper[4767]: I1124 21:52:07.989562 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-bundle\") pod \"73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:08 crc kubenswrapper[4767]: I1124 21:52:08.020897 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9jb6\" (UniqueName: \"kubernetes.io/projected/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-kube-api-access-v9jb6\") pod \"73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:08 crc kubenswrapper[4767]: I1124 21:52:08.086045 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:08 crc kubenswrapper[4767]: I1124 21:52:08.509631 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn"] Nov 24 21:52:08 crc kubenswrapper[4767]: I1124 21:52:08.822461 4767 generic.go:334] "Generic (PLEG): container finished" podID="244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" containerID="ec047a52adc2661e363d085f803355e29a6d5bd375066901b164c0855a62afa0" exitCode=0 Nov 24 21:52:08 crc kubenswrapper[4767]: I1124 21:52:08.822520 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" event={"ID":"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb","Type":"ContainerDied","Data":"ec047a52adc2661e363d085f803355e29a6d5bd375066901b164c0855a62afa0"} Nov 24 21:52:08 crc kubenswrapper[4767]: I1124 21:52:08.826524 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" event={"ID":"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb","Type":"ContainerStarted","Data":"f4bd05316f5fa967a81b38599963acfba1bb4e6460cfff745241bb391c8a0e78"} Nov 24 21:52:09 crc kubenswrapper[4767]: I1124 21:52:09.850346 4767 generic.go:334] "Generic (PLEG): container finished" podID="244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" containerID="b97f6e19efd17a3ad2eb00e149c38f5bfacbec96935c6ec1b9cee265c20d8ab5" exitCode=0 Nov 24 21:52:09 crc kubenswrapper[4767]: I1124 21:52:09.850480 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" event={"ID":"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb","Type":"ContainerDied","Data":"b97f6e19efd17a3ad2eb00e149c38f5bfacbec96935c6ec1b9cee265c20d8ab5"} Nov 24 21:52:10 crc kubenswrapper[4767]: I1124 21:52:10.863184 4767 generic.go:334] "Generic (PLEG): container finished" podID="244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" containerID="a84e631d3fbcd787e8097ab1c182538262cef184043ad5bf349bec7e1376efce" exitCode=0 Nov 24 21:52:10 crc kubenswrapper[4767]: I1124 21:52:10.863256 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" event={"ID":"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb","Type":"ContainerDied","Data":"a84e631d3fbcd787e8097ab1c182538262cef184043ad5bf349bec7e1376efce"} Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.254933 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.356363 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9jb6\" (UniqueName: \"kubernetes.io/projected/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-kube-api-access-v9jb6\") pod \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.356587 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-bundle\") pod \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.356652 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-util\") pod \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\" (UID: \"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb\") " Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.357524 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-bundle" (OuterVolumeSpecName: "bundle") pod "244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" (UID: "244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.362359 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-kube-api-access-v9jb6" (OuterVolumeSpecName: "kube-api-access-v9jb6") pod "244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" (UID: "244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb"). InnerVolumeSpecName "kube-api-access-v9jb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.386138 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-util" (OuterVolumeSpecName: "util") pod "244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" (UID: "244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.459174 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9jb6\" (UniqueName: \"kubernetes.io/projected/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-kube-api-access-v9jb6\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.459246 4767 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.459346 4767 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.887071 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" event={"ID":"244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb","Type":"ContainerDied","Data":"f4bd05316f5fa967a81b38599963acfba1bb4e6460cfff745241bb391c8a0e78"} Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.887117 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4bd05316f5fa967a81b38599963acfba1bb4e6460cfff745241bb391c8a0e78" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.887208 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.900623 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sqlm8"] Nov 24 21:52:12 crc kubenswrapper[4767]: E1124 21:52:12.901032 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" containerName="util" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.901060 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" containerName="util" Nov 24 21:52:12 crc kubenswrapper[4767]: E1124 21:52:12.901089 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" containerName="extract" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.901102 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" containerName="extract" Nov 24 21:52:12 crc kubenswrapper[4767]: E1124 21:52:12.901133 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" containerName="pull" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.901146 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" containerName="pull" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.901379 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb" containerName="extract" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.902927 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:12 crc kubenswrapper[4767]: I1124 21:52:12.913584 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sqlm8"] Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.070570 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-catalog-content\") pod \"community-operators-sqlm8\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.070809 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcndw\" (UniqueName: \"kubernetes.io/projected/4fc8c9a9-cfe8-4829-bb76-63a85166f620-kube-api-access-hcndw\") pod \"community-operators-sqlm8\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.070865 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-utilities\") pod \"community-operators-sqlm8\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.171793 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-catalog-content\") pod \"community-operators-sqlm8\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.171900 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcndw\" (UniqueName: \"kubernetes.io/projected/4fc8c9a9-cfe8-4829-bb76-63a85166f620-kube-api-access-hcndw\") pod \"community-operators-sqlm8\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.171925 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-utilities\") pod \"community-operators-sqlm8\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.172446 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-catalog-content\") pod \"community-operators-sqlm8\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.172553 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-utilities\") pod \"community-operators-sqlm8\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.196693 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hcndw\" (UniqueName: \"kubernetes.io/projected/4fc8c9a9-cfe8-4829-bb76-63a85166f620-kube-api-access-hcndw\") pod \"community-operators-sqlm8\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.269905 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:13 crc kubenswrapper[4767]: W1124 21:52:13.755692 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fc8c9a9_cfe8_4829_bb76_63a85166f620.slice/crio-004fefb64edc17aa974d04f5b835f774faf4b94b7fed80272a66fc540851da43 WatchSource:0}: Error finding container 004fefb64edc17aa974d04f5b835f774faf4b94b7fed80272a66fc540851da43: Status 404 returned error can't find the container with id 004fefb64edc17aa974d04f5b835f774faf4b94b7fed80272a66fc540851da43 Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.755846 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sqlm8"] Nov 24 21:52:13 crc kubenswrapper[4767]: I1124 21:52:13.894042 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqlm8" event={"ID":"4fc8c9a9-cfe8-4829-bb76-63a85166f620","Type":"ContainerStarted","Data":"004fefb64edc17aa974d04f5b835f774faf4b94b7fed80272a66fc540851da43"} Nov 24 21:52:14 crc kubenswrapper[4767]: I1124 21:52:14.903564 4767 generic.go:334] "Generic (PLEG): container finished" podID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" containerID="d01104dbdd6922f02aeb0c64fcd9355f5a337fab65322cff28d4180153bd4f79" exitCode=0 Nov 24 21:52:14 crc kubenswrapper[4767]: I1124 21:52:14.903666 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqlm8" event={"ID":"4fc8c9a9-cfe8-4829-bb76-63a85166f620","Type":"ContainerDied","Data":"d01104dbdd6922f02aeb0c64fcd9355f5a337fab65322cff28d4180153bd4f79"} Nov 24 21:52:15 crc kubenswrapper[4767]: I1124 21:52:15.912192 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqlm8" event={"ID":"4fc8c9a9-cfe8-4829-bb76-63a85166f620","Type":"ContainerStarted","Data":"cfbf6fdbbd6a68aa3078bf9bd3e1087466515f2ced2f395dfa7e28fc3986d850"} Nov 24 21:52:16 crc kubenswrapper[4767]: I1124 21:52:16.919736 4767 generic.go:334] "Generic (PLEG): container finished" podID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" containerID="cfbf6fdbbd6a68aa3078bf9bd3e1087466515f2ced2f395dfa7e28fc3986d850" exitCode=0 Nov 24 21:52:16 crc kubenswrapper[4767]: I1124 21:52:16.919782 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqlm8" event={"ID":"4fc8c9a9-cfe8-4829-bb76-63a85166f620","Type":"ContainerDied","Data":"cfbf6fdbbd6a68aa3078bf9bd3e1087466515f2ced2f395dfa7e28fc3986d850"} Nov 24 21:52:17 crc kubenswrapper[4767]: I1124 21:52:17.340209 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg"] Nov 24 21:52:17 crc kubenswrapper[4767]: I1124 21:52:17.341184 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg" Nov 24 21:52:17 crc kubenswrapper[4767]: I1124 21:52:17.343335 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-68cz9" Nov 24 21:52:17 crc kubenswrapper[4767]: I1124 21:52:17.376712 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg"] Nov 24 21:52:17 crc kubenswrapper[4767]: I1124 21:52:17.431289 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btkvp\" (UniqueName: \"kubernetes.io/projected/a5bf8969-1c9c-4141-bcc7-fcdb88508516-kube-api-access-btkvp\") pod \"openstack-operator-controller-operator-55d996bbb7-zpgcg\" (UID: \"a5bf8969-1c9c-4141-bcc7-fcdb88508516\") " pod="openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg" Nov 24 21:52:17 crc kubenswrapper[4767]: I1124 21:52:17.532416 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btkvp\" (UniqueName: \"kubernetes.io/projected/a5bf8969-1c9c-4141-bcc7-fcdb88508516-kube-api-access-btkvp\") pod \"openstack-operator-controller-operator-55d996bbb7-zpgcg\" (UID: \"a5bf8969-1c9c-4141-bcc7-fcdb88508516\") " pod="openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg" Nov 24 21:52:17 crc kubenswrapper[4767]: I1124 21:52:17.558339 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btkvp\" (UniqueName: \"kubernetes.io/projected/a5bf8969-1c9c-4141-bcc7-fcdb88508516-kube-api-access-btkvp\") pod \"openstack-operator-controller-operator-55d996bbb7-zpgcg\" (UID: \"a5bf8969-1c9c-4141-bcc7-fcdb88508516\") " pod="openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg" Nov 24 21:52:17 crc kubenswrapper[4767]: I1124 21:52:17.711126 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg" Nov 24 21:52:17 crc kubenswrapper[4767]: I1124 21:52:17.935187 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqlm8" event={"ID":"4fc8c9a9-cfe8-4829-bb76-63a85166f620","Type":"ContainerStarted","Data":"2a3cdd4089eefb8417e2cf6eba19a5623afdae5e800caf9d78e70bc69b10cbd9"} Nov 24 21:52:17 crc kubenswrapper[4767]: I1124 21:52:17.957947 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sqlm8" podStartSLOduration=3.546687212 podStartE2EDuration="5.957926141s" podCreationTimestamp="2025-11-24 21:52:12 +0000 UTC" firstStartedPulling="2025-11-24 21:52:14.906474226 +0000 UTC m=+817.823457598" lastFinishedPulling="2025-11-24 21:52:17.317713145 +0000 UTC m=+820.234696527" observedRunningTime="2025-11-24 21:52:17.954015 +0000 UTC m=+820.870998402" watchObservedRunningTime="2025-11-24 21:52:17.957926141 +0000 UTC m=+820.874909523" Nov 24 21:52:18 crc kubenswrapper[4767]: I1124 21:52:18.230814 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg"] Nov 24 21:52:18 crc kubenswrapper[4767]: W1124 21:52:18.246259 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5bf8969_1c9c_4141_bcc7_fcdb88508516.slice/crio-80e4d567cacada05c3cd41b5be96d72e772465e531877bd744d823adbd9c5a01 WatchSource:0}: Error finding container 80e4d567cacada05c3cd41b5be96d72e772465e531877bd744d823adbd9c5a01: Status 404 returned error can't find the container with id 80e4d567cacada05c3cd41b5be96d72e772465e531877bd744d823adbd9c5a01 Nov 24 21:52:18 crc kubenswrapper[4767]: I1124 21:52:18.943155 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg" event={"ID":"a5bf8969-1c9c-4141-bcc7-fcdb88508516","Type":"ContainerStarted","Data":"80e4d567cacada05c3cd41b5be96d72e772465e531877bd744d823adbd9c5a01"} Nov 24 21:52:22 crc kubenswrapper[4767]: I1124 21:52:22.972730 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg" event={"ID":"a5bf8969-1c9c-4141-bcc7-fcdb88508516","Type":"ContainerStarted","Data":"745c6237d61d0e0eb39dedd661689065e77688bf2b65ecd97af97860d0650299"} Nov 24 21:52:22 crc kubenswrapper[4767]: I1124 21:52:22.973153 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.009618 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg" podStartSLOduration=1.862606302 podStartE2EDuration="6.009593435s" podCreationTimestamp="2025-11-24 21:52:17 +0000 UTC" firstStartedPulling="2025-11-24 21:52:18.248859846 +0000 UTC m=+821.165843219" lastFinishedPulling="2025-11-24 21:52:22.39584696 +0000 UTC m=+825.312830352" observedRunningTime="2025-11-24 21:52:23.006879259 +0000 UTC m=+825.923862661" watchObservedRunningTime="2025-11-24 21:52:23.009593435 +0000 UTC m=+825.926576837" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.089559 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lb459"] Nov 24 
21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.090955 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.106329 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lb459"] Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.212910 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kscbc\" (UniqueName: \"kubernetes.io/projected/b776b6e7-aaaa-46ec-8088-ce0b0071d739-kube-api-access-kscbc\") pod \"redhat-marketplace-lb459\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.212964 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-catalog-content\") pod \"redhat-marketplace-lb459\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.213085 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-utilities\") pod \"redhat-marketplace-lb459\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.270066 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.270131 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.313983 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-utilities\") pod \"redhat-marketplace-lb459\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.314060 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kscbc\" (UniqueName: \"kubernetes.io/projected/b776b6e7-aaaa-46ec-8088-ce0b0071d739-kube-api-access-kscbc\") pod \"redhat-marketplace-lb459\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.314092 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-catalog-content\") pod \"redhat-marketplace-lb459\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.314817 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-catalog-content\") pod \"redhat-marketplace-lb459\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " pod="openshift-marketplace/redhat-marketplace-lb459" Nov 
24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.314891 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-utilities\") pod \"redhat-marketplace-lb459\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.334926 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.363571 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kscbc\" (UniqueName: \"kubernetes.io/projected/b776b6e7-aaaa-46ec-8088-ce0b0071d739-kube-api-access-kscbc\") pod \"redhat-marketplace-lb459\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.421068 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.859134 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lb459"] Nov 24 21:52:23 crc kubenswrapper[4767]: W1124 21:52:23.862203 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb776b6e7_aaaa_46ec_8088_ce0b0071d739.slice/crio-dcfa49af8e3f1717a77ac43db2a82c1570b27abb3738584777999dd6cdc42f5e WatchSource:0}: Error finding container dcfa49af8e3f1717a77ac43db2a82c1570b27abb3738584777999dd6cdc42f5e: Status 404 returned error can't find the container with id dcfa49af8e3f1717a77ac43db2a82c1570b27abb3738584777999dd6cdc42f5e Nov 24 21:52:23 crc kubenswrapper[4767]: I1124 21:52:23.981132 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb459" event={"ID":"b776b6e7-aaaa-46ec-8088-ce0b0071d739","Type":"ContainerStarted","Data":"dcfa49af8e3f1717a77ac43db2a82c1570b27abb3738584777999dd6cdc42f5e"} Nov 24 21:52:24 crc kubenswrapper[4767]: I1124 21:52:24.038356 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:24 crc kubenswrapper[4767]: I1124 21:52:24.994795 4767 generic.go:334] "Generic (PLEG): container finished" podID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" containerID="78b1d44d81eb6398a9ab1a191ed122fdd6aa145b67fc27015c0f9f709e37ee7f" exitCode=0 Nov 24 21:52:24 crc kubenswrapper[4767]: I1124 21:52:24.994863 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb459" event={"ID":"b776b6e7-aaaa-46ec-8088-ce0b0071d739","Type":"ContainerDied","Data":"78b1d44d81eb6398a9ab1a191ed122fdd6aa145b67fc27015c0f9f709e37ee7f"} Nov 24 21:52:26 crc kubenswrapper[4767]: I1124 21:52:26.003394 4767 generic.go:334] "Generic (PLEG): container finished" podID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" containerID="5e46d0a1d0ea4c63a5c094a0afb0e9f74d874484ed8daedf4f06ab6196eda406" exitCode=0 Nov 24 21:52:26 crc kubenswrapper[4767]: I1124 21:52:26.003736 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb459" event={"ID":"b776b6e7-aaaa-46ec-8088-ce0b0071d739","Type":"ContainerDied","Data":"5e46d0a1d0ea4c63a5c094a0afb0e9f74d874484ed8daedf4f06ab6196eda406"} Nov 24 21:52:26 crc 
kubenswrapper[4767]: I1124 21:52:26.881193 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sqlm8"] Nov 24 21:52:26 crc kubenswrapper[4767]: I1124 21:52:26.881744 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sqlm8" podUID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" containerName="registry-server" containerID="cri-o://2a3cdd4089eefb8417e2cf6eba19a5623afdae5e800caf9d78e70bc69b10cbd9" gracePeriod=2 Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.023718 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb459" event={"ID":"b776b6e7-aaaa-46ec-8088-ce0b0071d739","Type":"ContainerStarted","Data":"bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5"} Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.025859 4767 generic.go:334] "Generic (PLEG): container finished" podID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" containerID="2a3cdd4089eefb8417e2cf6eba19a5623afdae5e800caf9d78e70bc69b10cbd9" exitCode=0 Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.025901 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqlm8" event={"ID":"4fc8c9a9-cfe8-4829-bb76-63a85166f620","Type":"ContainerDied","Data":"2a3cdd4089eefb8417e2cf6eba19a5623afdae5e800caf9d78e70bc69b10cbd9"} Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.048310 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lb459" podStartSLOduration=2.552364124 podStartE2EDuration="4.048288531s" podCreationTimestamp="2025-11-24 21:52:23 +0000 UTC" firstStartedPulling="2025-11-24 21:52:24.997476907 +0000 UTC m=+827.914460319" lastFinishedPulling="2025-11-24 21:52:26.493401304 +0000 UTC m=+829.410384726" observedRunningTime="2025-11-24 21:52:27.041223311 +0000 UTC m=+829.958206683" watchObservedRunningTime="2025-11-24 21:52:27.048288531 +0000 UTC m=+829.965271913" Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.346564 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.473507 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcndw\" (UniqueName: \"kubernetes.io/projected/4fc8c9a9-cfe8-4829-bb76-63a85166f620-kube-api-access-hcndw\") pod \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.473638 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-utilities\") pod \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.473684 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-catalog-content\") pod \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\" (UID: \"4fc8c9a9-cfe8-4829-bb76-63a85166f620\") " Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.474456 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-utilities" (OuterVolumeSpecName: "utilities") pod "4fc8c9a9-cfe8-4829-bb76-63a85166f620" (UID: "4fc8c9a9-cfe8-4829-bb76-63a85166f620"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.480851 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc8c9a9-cfe8-4829-bb76-63a85166f620-kube-api-access-hcndw" (OuterVolumeSpecName: "kube-api-access-hcndw") pod "4fc8c9a9-cfe8-4829-bb76-63a85166f620" (UID: "4fc8c9a9-cfe8-4829-bb76-63a85166f620"). InnerVolumeSpecName "kube-api-access-hcndw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.544855 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fc8c9a9-cfe8-4829-bb76-63a85166f620" (UID: "4fc8c9a9-cfe8-4829-bb76-63a85166f620"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.574608 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcndw\" (UniqueName: \"kubernetes.io/projected/4fc8c9a9-cfe8-4829-bb76-63a85166f620-kube-api-access-hcndw\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.574643 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.574654 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fc8c9a9-cfe8-4829-bb76-63a85166f620-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:27 crc kubenswrapper[4767]: I1124 21:52:27.713961 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-55d996bbb7-zpgcg" Nov 24 21:52:28 crc kubenswrapper[4767]: I1124 21:52:28.033925 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sqlm8" Nov 24 21:52:28 crc kubenswrapper[4767]: I1124 21:52:28.033915 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqlm8" event={"ID":"4fc8c9a9-cfe8-4829-bb76-63a85166f620","Type":"ContainerDied","Data":"004fefb64edc17aa974d04f5b835f774faf4b94b7fed80272a66fc540851da43"} Nov 24 21:52:28 crc kubenswrapper[4767]: I1124 21:52:28.034370 4767 scope.go:117] "RemoveContainer" containerID="2a3cdd4089eefb8417e2cf6eba19a5623afdae5e800caf9d78e70bc69b10cbd9" Nov 24 21:52:28 crc kubenswrapper[4767]: I1124 21:52:28.051477 4767 scope.go:117] "RemoveContainer" containerID="cfbf6fdbbd6a68aa3078bf9bd3e1087466515f2ced2f395dfa7e28fc3986d850" Nov 24 21:52:28 crc kubenswrapper[4767]: I1124 21:52:28.059910 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sqlm8"] Nov 24 21:52:28 crc kubenswrapper[4767]: I1124 21:52:28.075315 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sqlm8"] Nov 24 21:52:28 crc kubenswrapper[4767]: I1124 21:52:28.089589 4767 scope.go:117] "RemoveContainer" containerID="d01104dbdd6922f02aeb0c64fcd9355f5a337fab65322cff28d4180153bd4f79" Nov 24 21:52:28 crc kubenswrapper[4767]: I1124 21:52:28.331035 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" path="/var/lib/kubelet/pods/4fc8c9a9-cfe8-4829-bb76-63a85166f620/volumes" Nov 24 21:52:33 crc kubenswrapper[4767]: I1124 21:52:33.421297 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:33 crc kubenswrapper[4767]: I1124 21:52:33.421750 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:33 crc kubenswrapper[4767]: I1124 21:52:33.474736 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:34 crc kubenswrapper[4767]: I1124 21:52:34.125567 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:34 crc kubenswrapper[4767]: I1124 
21:52:34.282838 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lb459"] Nov 24 21:52:36 crc kubenswrapper[4767]: I1124 21:52:36.101872 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lb459" podUID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" containerName="registry-server" containerID="cri-o://bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5" gracePeriod=2 Nov 24 21:52:36 crc kubenswrapper[4767]: I1124 21:52:36.503960 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:36 crc kubenswrapper[4767]: I1124 21:52:36.697394 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kscbc\" (UniqueName: \"kubernetes.io/projected/b776b6e7-aaaa-46ec-8088-ce0b0071d739-kube-api-access-kscbc\") pod \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " Nov 24 21:52:36 crc kubenswrapper[4767]: I1124 21:52:36.697485 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-catalog-content\") pod \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " Nov 24 21:52:36 crc kubenswrapper[4767]: I1124 21:52:36.697521 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-utilities\") pod \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\" (UID: \"b776b6e7-aaaa-46ec-8088-ce0b0071d739\") " Nov 24 21:52:36 crc kubenswrapper[4767]: I1124 21:52:36.699167 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-utilities" (OuterVolumeSpecName: "utilities") pod "b776b6e7-aaaa-46ec-8088-ce0b0071d739" (UID: "b776b6e7-aaaa-46ec-8088-ce0b0071d739"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:36 crc kubenswrapper[4767]: I1124 21:52:36.703215 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b776b6e7-aaaa-46ec-8088-ce0b0071d739-kube-api-access-kscbc" (OuterVolumeSpecName: "kube-api-access-kscbc") pod "b776b6e7-aaaa-46ec-8088-ce0b0071d739" (UID: "b776b6e7-aaaa-46ec-8088-ce0b0071d739"). InnerVolumeSpecName "kube-api-access-kscbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:36 crc kubenswrapper[4767]: I1124 21:52:36.731130 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b776b6e7-aaaa-46ec-8088-ce0b0071d739" (UID: "b776b6e7-aaaa-46ec-8088-ce0b0071d739"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:36 crc kubenswrapper[4767]: I1124 21:52:36.799477 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kscbc\" (UniqueName: \"kubernetes.io/projected/b776b6e7-aaaa-46ec-8088-ce0b0071d739-kube-api-access-kscbc\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:36 crc kubenswrapper[4767]: I1124 21:52:36.799516 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:36 crc kubenswrapper[4767]: I1124 21:52:36.799529 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b776b6e7-aaaa-46ec-8088-ce0b0071d739-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.109336 4767 generic.go:334] "Generic (PLEG): container finished" podID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" containerID="bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5" exitCode=0 Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.109421 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lb459" Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.109444 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb459" event={"ID":"b776b6e7-aaaa-46ec-8088-ce0b0071d739","Type":"ContainerDied","Data":"bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5"} Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.110595 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb459" event={"ID":"b776b6e7-aaaa-46ec-8088-ce0b0071d739","Type":"ContainerDied","Data":"dcfa49af8e3f1717a77ac43db2a82c1570b27abb3738584777999dd6cdc42f5e"} Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.110621 4767 scope.go:117] "RemoveContainer" containerID="bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5" Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.139222 4767 scope.go:117] "RemoveContainer" containerID="5e46d0a1d0ea4c63a5c094a0afb0e9f74d874484ed8daedf4f06ab6196eda406" Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.187068 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lb459"] Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.193637 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lb459"] Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.202861 4767 scope.go:117] "RemoveContainer" containerID="78b1d44d81eb6398a9ab1a191ed122fdd6aa145b67fc27015c0f9f709e37ee7f" Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.225088 4767 scope.go:117] "RemoveContainer" containerID="bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5" Nov 24 21:52:37 crc kubenswrapper[4767]: E1124 21:52:37.225569 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5\": container with ID starting with bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5 not found: ID does not exist" containerID="bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5" Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.225613 4767 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5"} err="failed to get container status \"bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5\": rpc error: code = NotFound desc = could not find container \"bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5\": container with ID starting with bec024ff9fd36607ea1586a48dd902934e8c93d8b45076c0aad1aa564120d0a5 not found: ID does not exist" Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.225646 4767 scope.go:117] "RemoveContainer" containerID="5e46d0a1d0ea4c63a5c094a0afb0e9f74d874484ed8daedf4f06ab6196eda406" Nov 24 21:52:37 crc kubenswrapper[4767]: E1124 21:52:37.231701 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e46d0a1d0ea4c63a5c094a0afb0e9f74d874484ed8daedf4f06ab6196eda406\": container with ID starting with 5e46d0a1d0ea4c63a5c094a0afb0e9f74d874484ed8daedf4f06ab6196eda406 not found: ID does not exist" containerID="5e46d0a1d0ea4c63a5c094a0afb0e9f74d874484ed8daedf4f06ab6196eda406" Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.231744 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e46d0a1d0ea4c63a5c094a0afb0e9f74d874484ed8daedf4f06ab6196eda406"} err="failed to get container status \"5e46d0a1d0ea4c63a5c094a0afb0e9f74d874484ed8daedf4f06ab6196eda406\": rpc error: code = NotFound desc = could not find container \"5e46d0a1d0ea4c63a5c094a0afb0e9f74d874484ed8daedf4f06ab6196eda406\": container with ID starting with 5e46d0a1d0ea4c63a5c094a0afb0e9f74d874484ed8daedf4f06ab6196eda406 not found: ID does not exist" Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.231770 4767 scope.go:117] "RemoveContainer" containerID="78b1d44d81eb6398a9ab1a191ed122fdd6aa145b67fc27015c0f9f709e37ee7f" Nov 24 21:52:37 crc kubenswrapper[4767]: E1124 21:52:37.232056 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b1d44d81eb6398a9ab1a191ed122fdd6aa145b67fc27015c0f9f709e37ee7f\": container with ID starting with 78b1d44d81eb6398a9ab1a191ed122fdd6aa145b67fc27015c0f9f709e37ee7f not found: ID does not exist" containerID="78b1d44d81eb6398a9ab1a191ed122fdd6aa145b67fc27015c0f9f709e37ee7f" Nov 24 21:52:37 crc kubenswrapper[4767]: I1124 21:52:37.232097 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b1d44d81eb6398a9ab1a191ed122fdd6aa145b67fc27015c0f9f709e37ee7f"} err="failed to get container status \"78b1d44d81eb6398a9ab1a191ed122fdd6aa145b67fc27015c0f9f709e37ee7f\": rpc error: code = NotFound desc = could not find container \"78b1d44d81eb6398a9ab1a191ed122fdd6aa145b67fc27015c0f9f709e37ee7f\": container with ID starting with 78b1d44d81eb6398a9ab1a191ed122fdd6aa145b67fc27015c0f9f709e37ee7f not found: ID does not exist" Nov 24 21:52:38 crc kubenswrapper[4767]: I1124 21:52:38.327551 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" path="/var/lib/kubelet/pods/b776b6e7-aaaa-46ec-8088-ce0b0071d739/volumes" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.045833 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gwdqz"] Nov 24 21:52:42 crc kubenswrapper[4767]: E1124 21:52:42.046491 4767 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" containerName="extract-utilities" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.046502 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" containerName="extract-utilities" Nov 24 21:52:42 crc kubenswrapper[4767]: E1124 21:52:42.046519 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" containerName="extract-content" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.046525 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" containerName="extract-content" Nov 24 21:52:42 crc kubenswrapper[4767]: E1124 21:52:42.046541 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" containerName="registry-server" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.046549 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" containerName="registry-server" Nov 24 21:52:42 crc kubenswrapper[4767]: E1124 21:52:42.046556 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" containerName="extract-utilities" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.046578 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" containerName="extract-utilities" Nov 24 21:52:42 crc kubenswrapper[4767]: E1124 21:52:42.046588 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" containerName="registry-server" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.046593 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" containerName="registry-server" Nov 24 21:52:42 crc kubenswrapper[4767]: E1124 21:52:42.046600 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" containerName="extract-content" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.046606 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" containerName="extract-content" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.046700 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc8c9a9-cfe8-4829-bb76-63a85166f620" containerName="registry-server" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.046712 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="b776b6e7-aaaa-46ec-8088-ce0b0071d739" containerName="registry-server" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.047518 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.064476 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gwdqz"] Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.066745 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-catalog-content\") pod \"certified-operators-gwdqz\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.066793 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg4jt\" (UniqueName: \"kubernetes.io/projected/76272833-44ed-4e2f-b20f-1479146df875-kube-api-access-mg4jt\") pod \"certified-operators-gwdqz\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.066840 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-utilities\") pod \"certified-operators-gwdqz\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.168381 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-utilities\") pod \"certified-operators-gwdqz\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.168449 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-catalog-content\") pod \"certified-operators-gwdqz\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.168484 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg4jt\" (UniqueName: \"kubernetes.io/projected/76272833-44ed-4e2f-b20f-1479146df875-kube-api-access-mg4jt\") pod \"certified-operators-gwdqz\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.168992 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-utilities\") pod \"certified-operators-gwdqz\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.168992 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-catalog-content\") pod \"certified-operators-gwdqz\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.204374 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mg4jt\" (UniqueName: \"kubernetes.io/projected/76272833-44ed-4e2f-b20f-1479146df875-kube-api-access-mg4jt\") pod \"certified-operators-gwdqz\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.367824 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:42 crc kubenswrapper[4767]: I1124 21:52:42.834832 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gwdqz"] Nov 24 21:52:43 crc kubenswrapper[4767]: I1124 21:52:43.166381 4767 generic.go:334] "Generic (PLEG): container finished" podID="76272833-44ed-4e2f-b20f-1479146df875" containerID="fabe81467ba63f27af06164b2b9df860ec778ec595ebba341ca7207ba3577f98" exitCode=0 Nov 24 21:52:43 crc kubenswrapper[4767]: I1124 21:52:43.166464 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gwdqz" event={"ID":"76272833-44ed-4e2f-b20f-1479146df875","Type":"ContainerDied","Data":"fabe81467ba63f27af06164b2b9df860ec778ec595ebba341ca7207ba3577f98"} Nov 24 21:52:43 crc kubenswrapper[4767]: I1124 21:52:43.167057 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gwdqz" event={"ID":"76272833-44ed-4e2f-b20f-1479146df875","Type":"ContainerStarted","Data":"2373e750fdf0d5119f38118b8de7133fbd87083a6e7953faaf7c819ba6df04f1"} Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.176110 4767 generic.go:334] "Generic (PLEG): container finished" podID="76272833-44ed-4e2f-b20f-1479146df875" containerID="edf2f4ecabe9473ce800b146a1962220cf956873e8b1e516d4d8dcabdbe75501" exitCode=0 Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.176156 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gwdqz" event={"ID":"76272833-44ed-4e2f-b20f-1479146df875","Type":"ContainerDied","Data":"edf2f4ecabe9473ce800b146a1962220cf956873e8b1e516d4d8dcabdbe75501"} Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.948447 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87"] Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.949560 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87" Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.951110 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-68nvq" Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.954355 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7"] Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.955217 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7" Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.957545 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-6pxq9" Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.963241 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87"] Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.979086 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7"] Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.995355 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw"] Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.996389 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw" Nov 24 21:52:44 crc kubenswrapper[4767]: I1124 21:52:44.998730 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-t7m6l" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.002867 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.003939 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.009741 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-rc6fp" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.012812 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.018990 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkrhm\" (UniqueName: \"kubernetes.io/projected/5abc7b42-2e06-4722-b3e4-aab9de868251-kube-api-access-xkrhm\") pod \"designate-operator-controller-manager-7d695c9b56-c42hw\" (UID: \"5abc7b42-2e06-4722-b3e4-aab9de868251\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.019206 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v64fh\" (UniqueName: \"kubernetes.io/projected/44564b48-f353-4b3f-a0b7-b42ecd1bf838-kube-api-access-v64fh\") pod \"barbican-operator-controller-manager-86dc4d89c8-z6h87\" (UID: \"44564b48-f353-4b3f-a0b7-b42ecd1bf838\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.019331 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfs9r\" (UniqueName: \"kubernetes.io/projected/1cb193ac-a6d0-4981-91b8-234d77ab2cd7-kube-api-access-sfs9r\") pod \"cinder-operator-controller-manager-79856dc55c-hxzx7\" (UID: \"1cb193ac-a6d0-4981-91b8-234d77ab2cd7\") " 
pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.019369 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzw45\" (UniqueName: \"kubernetes.io/projected/e8cfe9d6-3aba-44af-9dbc-679d34dc98d0-kube-api-access-pzw45\") pod \"glance-operator-controller-manager-68b95954c9-ns9km\" (UID: \"e8cfe9d6-3aba-44af-9dbc-679d34dc98d0\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.027566 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.040889 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.041865 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.051925 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-z5lpp" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.060174 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.073708 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.076413 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.077843 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.082145 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jfvg9" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.097865 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.098871 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.107833 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4gsdl" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.108059 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.124562 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-wln68\" (UID: \"7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.124618 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7btdt\" (UniqueName: \"kubernetes.io/projected/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-kube-api-access-7btdt\") pod \"infra-operator-controller-manager-d5cc86f4b-wln68\" (UID: \"7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.124649 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkrhm\" (UniqueName: \"kubernetes.io/projected/5abc7b42-2e06-4722-b3e4-aab9de868251-kube-api-access-xkrhm\") pod \"designate-operator-controller-manager-7d695c9b56-c42hw\" (UID: \"5abc7b42-2e06-4722-b3e4-aab9de868251\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.124669 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v64fh\" (UniqueName: \"kubernetes.io/projected/44564b48-f353-4b3f-a0b7-b42ecd1bf838-kube-api-access-v64fh\") pod \"barbican-operator-controller-manager-86dc4d89c8-z6h87\" (UID: \"44564b48-f353-4b3f-a0b7-b42ecd1bf838\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.124771 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8mn9\" (UniqueName: \"kubernetes.io/projected/945744e6-8179-45cb-a020-de9b73fa89a1-kube-api-access-n8mn9\") pod \"heat-operator-controller-manager-774b86978c-wtwkg\" (UID: \"945744e6-8179-45cb-a020-de9b73fa89a1\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.124833 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfs9r\" (UniqueName: \"kubernetes.io/projected/1cb193ac-a6d0-4981-91b8-234d77ab2cd7-kube-api-access-sfs9r\") pod \"cinder-operator-controller-manager-79856dc55c-hxzx7\" (UID: \"1cb193ac-a6d0-4981-91b8-234d77ab2cd7\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.124885 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzw45\" (UniqueName: \"kubernetes.io/projected/e8cfe9d6-3aba-44af-9dbc-679d34dc98d0-kube-api-access-pzw45\") pod 
\"glance-operator-controller-manager-68b95954c9-ns9km\" (UID: \"e8cfe9d6-3aba-44af-9dbc-679d34dc98d0\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.124936 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrglx\" (UniqueName: \"kubernetes.io/projected/78ad5af3-1937-484b-bd41-9a7ac9d09db3-kube-api-access-lrglx\") pod \"horizon-operator-controller-manager-68c9694994-7pth6\" (UID: \"78ad5af3-1937-484b-bd41-9a7ac9d09db3\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.130575 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.131821 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.134635 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-h86d6" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.152145 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.158225 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v64fh\" (UniqueName: \"kubernetes.io/projected/44564b48-f353-4b3f-a0b7-b42ecd1bf838-kube-api-access-v64fh\") pod \"barbican-operator-controller-manager-86dc4d89c8-z6h87\" (UID: \"44564b48-f353-4b3f-a0b7-b42ecd1bf838\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.159404 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkrhm\" (UniqueName: \"kubernetes.io/projected/5abc7b42-2e06-4722-b3e4-aab9de868251-kube-api-access-xkrhm\") pod \"designate-operator-controller-manager-7d695c9b56-c42hw\" (UID: \"5abc7b42-2e06-4722-b3e4-aab9de868251\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.186710 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfs9r\" (UniqueName: \"kubernetes.io/projected/1cb193ac-a6d0-4981-91b8-234d77ab2cd7-kube-api-access-sfs9r\") pod \"cinder-operator-controller-manager-79856dc55c-hxzx7\" (UID: \"1cb193ac-a6d0-4981-91b8-234d77ab2cd7\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.180911 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzw45\" (UniqueName: \"kubernetes.io/projected/e8cfe9d6-3aba-44af-9dbc-679d34dc98d0-kube-api-access-pzw45\") pod \"glance-operator-controller-manager-68b95954c9-ns9km\" (UID: \"e8cfe9d6-3aba-44af-9dbc-679d34dc98d0\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.204678 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.211065 4767 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.215681 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.224290 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gzhst" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.229884 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7btdt\" (UniqueName: \"kubernetes.io/projected/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-kube-api-access-7btdt\") pod \"infra-operator-controller-manager-d5cc86f4b-wln68\" (UID: \"7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.229963 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8mn9\" (UniqueName: \"kubernetes.io/projected/945744e6-8179-45cb-a020-de9b73fa89a1-kube-api-access-n8mn9\") pod \"heat-operator-controller-manager-774b86978c-wtwkg\" (UID: \"945744e6-8179-45cb-a020-de9b73fa89a1\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.230009 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzkhd\" (UniqueName: \"kubernetes.io/projected/45530d57-164d-48f7-89e1-0a0f85ccb029-kube-api-access-zzkhd\") pod \"keystone-operator-controller-manager-748dc6576f-zdlcp\" (UID: \"45530d57-164d-48f7-89e1-0a0f85ccb029\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.230039 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lgpk\" (UniqueName: \"kubernetes.io/projected/a5a1f537-9c37-40a5-9f2f-a9ec762ca458-kube-api-access-5lgpk\") pod \"ironic-operator-controller-manager-5bfcdc958c-rmwtn\" (UID: \"a5a1f537-9c37-40a5-9f2f-a9ec762ca458\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.230060 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrglx\" (UniqueName: \"kubernetes.io/projected/78ad5af3-1937-484b-bd41-9a7ac9d09db3-kube-api-access-lrglx\") pod \"horizon-operator-controller-manager-68c9694994-7pth6\" (UID: \"78ad5af3-1937-484b-bd41-9a7ac9d09db3\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.230093 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-wln68\" (UID: \"7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:52:45 crc kubenswrapper[4767]: E1124 21:52:45.230333 4767 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 21:52:45 crc 
kubenswrapper[4767]: E1124 21:52:45.230392 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-cert podName:7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8 nodeName:}" failed. No retries permitted until 2025-11-24 21:52:45.730368118 +0000 UTC m=+848.647351480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-cert") pod "infra-operator-controller-manager-d5cc86f4b-wln68" (UID: "7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8") : secret "infra-operator-webhook-server-cert" not found Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.232474 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gwdqz" event={"ID":"76272833-44ed-4e2f-b20f-1479146df875","Type":"ContainerStarted","Data":"041597e2091ee835339a2a86e7c56222e3e686fe2852a4c0ba56c47e81b6d14d"} Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.255308 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.261588 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.262755 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.266330 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-h98l9" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.270441 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.273244 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.274651 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.278806 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrglx\" (UniqueName: \"kubernetes.io/projected/78ad5af3-1937-484b-bd41-9a7ac9d09db3-kube-api-access-lrglx\") pod \"horizon-operator-controller-manager-68c9694994-7pth6\" (UID: \"78ad5af3-1937-484b-bd41-9a7ac9d09db3\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.279071 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8mn9\" (UniqueName: \"kubernetes.io/projected/945744e6-8179-45cb-a020-de9b73fa89a1-kube-api-access-n8mn9\") pod \"heat-operator-controller-manager-774b86978c-wtwkg\" (UID: \"945744e6-8179-45cb-a020-de9b73fa89a1\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.302633 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.302807 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7btdt\" (UniqueName: \"kubernetes.io/projected/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-kube-api-access-7btdt\") pod \"infra-operator-controller-manager-d5cc86f4b-wln68\" (UID: \"7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.303814 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.312098 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.316374 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.319115 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5tffn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.319515 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.326497 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.331307 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dlmdm" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.331694 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk78l\" (UniqueName: \"kubernetes.io/projected/c4266ab7-4886-4015-9a87-6454fc59e9c5-kube-api-access-lk78l\") pod \"manila-operator-controller-manager-58bb8d67cc-9c4l8\" (UID: \"c4266ab7-4886-4015-9a87-6454fc59e9c5\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.331745 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzkhd\" (UniqueName: \"kubernetes.io/projected/45530d57-164d-48f7-89e1-0a0f85ccb029-kube-api-access-zzkhd\") pod \"keystone-operator-controller-manager-748dc6576f-zdlcp\" (UID: \"45530d57-164d-48f7-89e1-0a0f85ccb029\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.331777 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lgpk\" (UniqueName: \"kubernetes.io/projected/a5a1f537-9c37-40a5-9f2f-a9ec762ca458-kube-api-access-5lgpk\") pod \"ironic-operator-controller-manager-5bfcdc958c-rmwtn\" (UID: \"a5a1f537-9c37-40a5-9f2f-a9ec762ca458\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.331839 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knfn2\" (UniqueName: \"kubernetes.io/projected/acb0f017-b32b-4d0a-98b5-bd8d4db084ea-kube-api-access-knfn2\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-ghpkb\" (UID: \"acb0f017-b32b-4d0a-98b5-bd8d4db084ea\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.333328 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.334614 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.343782 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kns64" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.349367 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.358800 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.359822 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.363298 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.365931 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzkhd\" (UniqueName: \"kubernetes.io/projected/45530d57-164d-48f7-89e1-0a0f85ccb029-kube-api-access-zzkhd\") pod \"keystone-operator-controller-manager-748dc6576f-zdlcp\" (UID: \"45530d57-164d-48f7-89e1-0a0f85ccb029\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.367164 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lgpk\" (UniqueName: \"kubernetes.io/projected/a5a1f537-9c37-40a5-9f2f-a9ec762ca458-kube-api-access-5lgpk\") pod \"ironic-operator-controller-manager-5bfcdc958c-rmwtn\" (UID: \"a5a1f537-9c37-40a5-9f2f-a9ec762ca458\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.367482 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.369239 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-wtwhc" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.398658 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.405422 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.419017 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.420148 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.422342 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.426194 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-trlhn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.429953 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.435367 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knfn2\" (UniqueName: \"kubernetes.io/projected/acb0f017-b32b-4d0a-98b5-bd8d4db084ea-kube-api-access-knfn2\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-ghpkb\" (UID: \"acb0f017-b32b-4d0a-98b5-bd8d4db084ea\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.436459 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7l9v\" (UniqueName: \"kubernetes.io/projected/97bfe853-33bd-4dfc-b7bf-f9c82d9d0ba8-kube-api-access-f7l9v\") pod \"neutron-operator-controller-manager-7c57c8bbc4-wmpbx\" (UID: \"97bfe853-33bd-4dfc-b7bf-f9c82d9d0ba8\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.436512 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk78l\" (UniqueName: \"kubernetes.io/projected/c4266ab7-4886-4015-9a87-6454fc59e9c5-kube-api-access-lk78l\") pod \"manila-operator-controller-manager-58bb8d67cc-9c4l8\" (UID: \"c4266ab7-4886-4015-9a87-6454fc59e9c5\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.439987 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.456443 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.458123 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.459118 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.459451 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.461178 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-hlmns" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.461425 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jtb7g" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.463097 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.463920 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knfn2\" (UniqueName: \"kubernetes.io/projected/acb0f017-b32b-4d0a-98b5-bd8d4db084ea-kube-api-access-knfn2\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-ghpkb\" (UID: \"acb0f017-b32b-4d0a-98b5-bd8d4db084ea\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.465809 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk78l\" (UniqueName: \"kubernetes.io/projected/c4266ab7-4886-4015-9a87-6454fc59e9c5-kube-api-access-lk78l\") pod \"manila-operator-controller-manager-58bb8d67cc-9c4l8\" (UID: \"c4266ab7-4886-4015-9a87-6454fc59e9c5\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.483929 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.489106 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.494367 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.495372 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.498402 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-g9x9x" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.500228 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.504728 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.507587 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jbhsq" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.510534 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.511451 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gwdqz" podStartSLOduration=2.115361545 podStartE2EDuration="3.511433614s" podCreationTimestamp="2025-11-24 21:52:42 +0000 UTC" firstStartedPulling="2025-11-24 21:52:43.169086034 +0000 UTC m=+846.086069406" lastFinishedPulling="2025-11-24 21:52:44.565158103 +0000 UTC m=+847.482141475" observedRunningTime="2025-11-24 21:52:45.382931972 +0000 UTC m=+848.299915344" watchObservedRunningTime="2025-11-24 21:52:45.511433614 +0000 UTC m=+848.428416986" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.523255 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.539501 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b9sl\" (UniqueName: \"kubernetes.io/projected/fb5e8630-50f8-4d2c-a77a-d23b6441386a-kube-api-access-7b9sl\") pod \"placement-operator-controller-manager-5db546f9d9-7v5f9\" (UID: \"fb5e8630-50f8-4d2c-a77a-d23b6441386a\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.539573 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7fkq\" (UniqueName: \"kubernetes.io/projected/ea0b61d0-e20f-40eb-a3a8-329ff271f057-kube-api-access-x7fkq\") pod \"ovn-operator-controller-manager-66cf5c67ff-g7fnm\" (UID: \"ea0b61d0-e20f-40eb-a3a8-329ff271f057\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.539628 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc\" (UID: \"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.539649 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7l9v\" (UniqueName: \"kubernetes.io/projected/97bfe853-33bd-4dfc-b7bf-f9c82d9d0ba8-kube-api-access-f7l9v\") pod \"neutron-operator-controller-manager-7c57c8bbc4-wmpbx\" (UID: \"97bfe853-33bd-4dfc-b7bf-f9c82d9d0ba8\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.539713 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6tc5\" (UniqueName: \"kubernetes.io/projected/bde0dfef-808a-4851-81a8-968847586652-kube-api-access-l6tc5\") pod 
\"octavia-operator-controller-manager-fd75fd47d-vdr7z\" (UID: \"bde0dfef-808a-4851-81a8-968847586652\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.539732 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5rn7\" (UniqueName: \"kubernetes.io/projected/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-kube-api-access-x5rn7\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc\" (UID: \"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.539786 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qm7n\" (UniqueName: \"kubernetes.io/projected/b7220fb1-add2-490e-9a22-09ca48f0de97-kube-api-access-2qm7n\") pod \"nova-operator-controller-manager-79556f57fc-jjk4x\" (UID: \"b7220fb1-add2-490e-9a22-09ca48f0de97\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.539803 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnz47\" (UniqueName: \"kubernetes.io/projected/0266992d-7010-4fa3-9a94-2a7ab457f4ca-kube-api-access-fnz47\") pod \"swift-operator-controller-manager-6fdc4fcf86-7bvtn\" (UID: \"0266992d-7010-4fa3-9a94-2a7ab457f4ca\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.539825 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4fcr\" (UniqueName: \"kubernetes.io/projected/aa98c97b-2d21-481f-9ddf-3e5adce9f626-kube-api-access-n4fcr\") pod \"telemetry-operator-controller-manager-567f98c9d-hpv5b\" (UID: \"aa98c97b-2d21-481f-9ddf-3e5adce9f626\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.572650 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7l9v\" (UniqueName: \"kubernetes.io/projected/97bfe853-33bd-4dfc-b7bf-f9c82d9d0ba8-kube-api-access-f7l9v\") pod \"neutron-operator-controller-manager-7c57c8bbc4-wmpbx\" (UID: \"97bfe853-33bd-4dfc-b7bf-f9c82d9d0ba8\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.573542 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.574935 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.577159 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-q7xw7" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.582015 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.630112 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.644259 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6tc5\" (UniqueName: \"kubernetes.io/projected/bde0dfef-808a-4851-81a8-968847586652-kube-api-access-l6tc5\") pod \"octavia-operator-controller-manager-fd75fd47d-vdr7z\" (UID: \"bde0dfef-808a-4851-81a8-968847586652\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.644315 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5rn7\" (UniqueName: \"kubernetes.io/projected/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-kube-api-access-x5rn7\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc\" (UID: \"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.644351 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qm7n\" (UniqueName: \"kubernetes.io/projected/b7220fb1-add2-490e-9a22-09ca48f0de97-kube-api-access-2qm7n\") pod \"nova-operator-controller-manager-79556f57fc-jjk4x\" (UID: \"b7220fb1-add2-490e-9a22-09ca48f0de97\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.644371 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnz47\" (UniqueName: \"kubernetes.io/projected/0266992d-7010-4fa3-9a94-2a7ab457f4ca-kube-api-access-fnz47\") pod \"swift-operator-controller-manager-6fdc4fcf86-7bvtn\" (UID: \"0266992d-7010-4fa3-9a94-2a7ab457f4ca\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.644393 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4fcr\" (UniqueName: \"kubernetes.io/projected/aa98c97b-2d21-481f-9ddf-3e5adce9f626-kube-api-access-n4fcr\") pod \"telemetry-operator-controller-manager-567f98c9d-hpv5b\" (UID: \"aa98c97b-2d21-481f-9ddf-3e5adce9f626\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.644454 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b9sl\" (UniqueName: \"kubernetes.io/projected/fb5e8630-50f8-4d2c-a77a-d23b6441386a-kube-api-access-7b9sl\") pod \"placement-operator-controller-manager-5db546f9d9-7v5f9\" (UID: \"fb5e8630-50f8-4d2c-a77a-d23b6441386a\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.644483 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7fkq\" (UniqueName: \"kubernetes.io/projected/ea0b61d0-e20f-40eb-a3a8-329ff271f057-kube-api-access-x7fkq\") pod \"ovn-operator-controller-manager-66cf5c67ff-g7fnm\" (UID: \"ea0b61d0-e20f-40eb-a3a8-329ff271f057\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.644513 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc\" (UID: \"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:52:45 crc kubenswrapper[4767]: E1124 21:52:45.644878 4767 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:52:45 crc kubenswrapper[4767]: E1124 21:52:45.644956 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert podName:1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d nodeName:}" failed. No retries permitted until 2025-11-24 21:52:46.144935188 +0000 UTC m=+849.061918560 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" (UID: "1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.683876 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.716127 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7fkq\" (UniqueName: \"kubernetes.io/projected/ea0b61d0-e20f-40eb-a3a8-329ff271f057-kube-api-access-x7fkq\") pod \"ovn-operator-controller-manager-66cf5c67ff-g7fnm\" (UID: \"ea0b61d0-e20f-40eb-a3a8-329ff271f057\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.724754 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.752549 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.757743 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.760119 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6tc5\" (UniqueName: \"kubernetes.io/projected/bde0dfef-808a-4851-81a8-968847586652-kube-api-access-l6tc5\") pod \"octavia-operator-controller-manager-fd75fd47d-vdr7z\" (UID: \"bde0dfef-808a-4851-81a8-968847586652\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.760868 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnz47\" (UniqueName: \"kubernetes.io/projected/0266992d-7010-4fa3-9a94-2a7ab457f4ca-kube-api-access-fnz47\") pod \"swift-operator-controller-manager-6fdc4fcf86-7bvtn\" (UID: \"0266992d-7010-4fa3-9a94-2a7ab457f4ca\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.761111 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b9sl\" (UniqueName: \"kubernetes.io/projected/fb5e8630-50f8-4d2c-a77a-d23b6441386a-kube-api-access-7b9sl\") pod \"placement-operator-controller-manager-5db546f9d9-7v5f9\" (UID: \"fb5e8630-50f8-4d2c-a77a-d23b6441386a\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.761288 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qm7n\" (UniqueName: \"kubernetes.io/projected/b7220fb1-add2-490e-9a22-09ca48f0de97-kube-api-access-2qm7n\") pod \"nova-operator-controller-manager-79556f57fc-jjk4x\" (UID: \"b7220fb1-add2-490e-9a22-09ca48f0de97\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.761960 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-wln68\" (UID: \"7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.761995 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxcbt\" (UniqueName: \"kubernetes.io/projected/ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b-kube-api-access-sxcbt\") pod \"test-operator-controller-manager-5cb74df96-2nr8k\" (UID: \"ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" Nov 24 21:52:45 crc kubenswrapper[4767]: E1124 21:52:45.762117 4767 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 21:52:45 crc kubenswrapper[4767]: E1124 21:52:45.762156 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-cert podName:7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8 nodeName:}" failed. No retries permitted until 2025-11-24 21:52:46.76214302 +0000 UTC m=+849.679126392 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-cert") pod "infra-operator-controller-manager-d5cc86f4b-wln68" (UID: "7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8") : secret "infra-operator-webhook-server-cert" not found Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.771954 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4fcr\" (UniqueName: \"kubernetes.io/projected/aa98c97b-2d21-481f-9ddf-3e5adce9f626-kube-api-access-n4fcr\") pod \"telemetry-operator-controller-manager-567f98c9d-hpv5b\" (UID: \"aa98c97b-2d21-481f-9ddf-3e5adce9f626\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.774870 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.779019 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5rn7\" (UniqueName: \"kubernetes.io/projected/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-kube-api-access-x5rn7\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc\" (UID: \"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.784941 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.789184 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-b9kpf" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.792635 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.793235 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.844433 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.858497 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.860713 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.861284 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-22jwc" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.861396 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.861481 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.863191 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxcbt\" (UniqueName: \"kubernetes.io/projected/ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b-kube-api-access-sxcbt\") pod \"test-operator-controller-manager-5cb74df96-2nr8k\" (UID: \"ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.882422 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.883154 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7"] Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.883222 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.893746 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-f4v86" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.894151 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxcbt\" (UniqueName: \"kubernetes.io/projected/ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b-kube-api-access-sxcbt\") pod \"test-operator-controller-manager-5cb74df96-2nr8k\" (UID: \"ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.905702 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.917621 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.930373 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.952638 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.975840 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cc2z\" (UniqueName: \"kubernetes.io/projected/52982ab5-3f6d-47fa-baf9-c6957e170ffe-kube-api-access-8cc2z\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4sfq7\" (UID: \"52982ab5-3f6d-47fa-baf9-c6957e170ffe\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.975946 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbtv4\" (UniqueName: \"kubernetes.io/projected/0265238d-c56a-428f-a359-a2e9cff33593-kube-api-access-qbtv4\") pod \"watcher-operator-controller-manager-5c96f79b7c-4msp7\" (UID: \"0265238d-c56a-428f-a359-a2e9cff33593\") " pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.976000 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.976044 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtn5p\" (UniqueName: \"kubernetes.io/projected/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-kube-api-access-xtn5p\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:45 crc kubenswrapper[4767]: I1124 21:52:45.976063 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.012944 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.077552 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbtv4\" (UniqueName: \"kubernetes.io/projected/0265238d-c56a-428f-a359-a2e9cff33593-kube-api-access-qbtv4\") pod \"watcher-operator-controller-manager-5c96f79b7c-4msp7\" (UID: \"0265238d-c56a-428f-a359-a2e9cff33593\") " pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.077648 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.077727 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtn5p\" (UniqueName: \"kubernetes.io/projected/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-kube-api-access-xtn5p\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.077770 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.077800 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cc2z\" (UniqueName: \"kubernetes.io/projected/52982ab5-3f6d-47fa-baf9-c6957e170ffe-kube-api-access-8cc2z\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4sfq7\" (UID: \"52982ab5-3f6d-47fa-baf9-c6957e170ffe\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7" Nov 24 21:52:46 crc kubenswrapper[4767]: E1124 21:52:46.078154 4767 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 21:52:46 crc kubenswrapper[4767]: E1124 21:52:46.078213 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs podName:0ac691b7-c7ad-467b-b4f2-46e9d52c450f nodeName:}" failed. No retries permitted until 2025-11-24 21:52:46.578195739 +0000 UTC m=+849.495179111 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs") pod "openstack-operator-controller-manager-5d749b69b6-ns4rd" (UID: "0ac691b7-c7ad-467b-b4f2-46e9d52c450f") : secret "webhook-server-cert" not found Nov 24 21:52:46 crc kubenswrapper[4767]: E1124 21:52:46.078310 4767 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 21:52:46 crc kubenswrapper[4767]: E1124 21:52:46.078381 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs podName:0ac691b7-c7ad-467b-b4f2-46e9d52c450f nodeName:}" failed. No retries permitted until 2025-11-24 21:52:46.578358743 +0000 UTC m=+849.495342195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs") pod "openstack-operator-controller-manager-5d749b69b6-ns4rd" (UID: "0ac691b7-c7ad-467b-b4f2-46e9d52c450f") : secret "metrics-server-cert" not found Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.096651 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbtv4\" (UniqueName: \"kubernetes.io/projected/0265238d-c56a-428f-a359-a2e9cff33593-kube-api-access-qbtv4\") pod \"watcher-operator-controller-manager-5c96f79b7c-4msp7\" (UID: \"0265238d-c56a-428f-a359-a2e9cff33593\") " pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.097313 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtn5p\" (UniqueName: \"kubernetes.io/projected/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-kube-api-access-xtn5p\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.099242 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cc2z\" (UniqueName: \"kubernetes.io/projected/52982ab5-3f6d-47fa-baf9-c6957e170ffe-kube-api-access-8cc2z\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4sfq7\" (UID: \"52982ab5-3f6d-47fa-baf9-c6957e170ffe\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.180986 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc\" (UID: \"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:52:46 crc kubenswrapper[4767]: E1124 21:52:46.181160 4767 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:52:46 crc kubenswrapper[4767]: E1124 21:52:46.181211 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert podName:1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d nodeName:}" failed. 
No retries permitted until 2025-11-24 21:52:47.181197848 +0000 UTC m=+850.098181220 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" (UID: "1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.328669 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.354596 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.433002 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87"] Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.588047 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.588307 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:46 crc kubenswrapper[4767]: E1124 21:52:46.588332 4767 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 21:52:46 crc kubenswrapper[4767]: E1124 21:52:46.588405 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs podName:0ac691b7-c7ad-467b-b4f2-46e9d52c450f nodeName:}" failed. No retries permitted until 2025-11-24 21:52:47.58838725 +0000 UTC m=+850.505370622 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs") pod "openstack-operator-controller-manager-5d749b69b6-ns4rd" (UID: "0ac691b7-c7ad-467b-b4f2-46e9d52c450f") : secret "webhook-server-cert" not found Nov 24 21:52:46 crc kubenswrapper[4767]: E1124 21:52:46.588444 4767 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 21:52:46 crc kubenswrapper[4767]: E1124 21:52:46.588487 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs podName:0ac691b7-c7ad-467b-b4f2-46e9d52c450f nodeName:}" failed. No retries permitted until 2025-11-24 21:52:47.588473862 +0000 UTC m=+850.505457234 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs") pod "openstack-operator-controller-manager-5d749b69b6-ns4rd" (UID: "0ac691b7-c7ad-467b-b4f2-46e9d52c450f") : secret "metrics-server-cert" not found Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.791168 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-wln68\" (UID: \"7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.795901 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-wln68\" (UID: \"7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.840024 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km"] Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.850340 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7"] Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.854651 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw"] Nov 24 21:52:46 crc kubenswrapper[4767]: W1124 21:52:46.856148 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb193ac_a6d0_4981_91b8_234d77ab2cd7.slice/crio-e7f6159a05463252bc72d467e80d549e58b7f47b36ed0ac19e638fd933e9b67c WatchSource:0}: Error finding container e7f6159a05463252bc72d467e80d549e58b7f47b36ed0ac19e638fd933e9b67c: Status 404 returned error can't find the container with id e7f6159a05463252bc72d467e80d549e58b7f47b36ed0ac19e638fd933e9b67c Nov 24 21:52:46 crc kubenswrapper[4767]: W1124 21:52:46.861974 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5abc7b42_2e06_4722_b3e4_aab9de868251.slice/crio-365ad2de9e8e23eef8b9674902d741e5e3a23eff0aa89a8671e944a7387fae23 WatchSource:0}: Error finding container 365ad2de9e8e23eef8b9674902d741e5e3a23eff0aa89a8671e944a7387fae23: Status 404 returned error can't find the container with id 365ad2de9e8e23eef8b9674902d741e5e3a23eff0aa89a8671e944a7387fae23 Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.868209 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg"] Nov 24 21:52:46 crc kubenswrapper[4767]: I1124 21:52:46.927214 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.196863 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc\" (UID: \"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.197429 4767 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.197845 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert podName:1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d nodeName:}" failed. No retries permitted until 2025-11-24 21:52:49.197807322 +0000 UTC m=+852.114790694 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" (UID: "1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.225386 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp"] Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.249841 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x"] Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.256233 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" event={"ID":"e8cfe9d6-3aba-44af-9dbc-679d34dc98d0","Type":"ContainerStarted","Data":"5106d04ac583473e220ecea926252574510d83f4ad2731b82c620b28d64a2591"} Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.258416 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw" event={"ID":"5abc7b42-2e06-4722-b3e4-aab9de868251","Type":"ContainerStarted","Data":"365ad2de9e8e23eef8b9674902d741e5e3a23eff0aa89a8671e944a7387fae23"} Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.259050 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87" event={"ID":"44564b48-f353-4b3f-a0b7-b42ecd1bf838","Type":"ContainerStarted","Data":"10b12c0f12a48efdb9d5216f2d9b45c44ceee24dfc31d5e9e0653147b580b10a"} Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.259648 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7" event={"ID":"1cb193ac-a6d0-4981-91b8-234d77ab2cd7","Type":"ContainerStarted","Data":"e7f6159a05463252bc72d467e80d549e58b7f47b36ed0ac19e638fd933e9b67c"} Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.259867 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx"] Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.260828 4767 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg" event={"ID":"945744e6-8179-45cb-a020-de9b73fa89a1","Type":"ContainerStarted","Data":"790a53061defe8a8962177c29b81792f08c93b5a06a936cebe6bbdec93f1c20a"} Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.275391 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb"] Nov 24 21:52:47 crc kubenswrapper[4767]: W1124 21:52:47.278622 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45530d57_164d_48f7_89e1_0a0f85ccb029.slice/crio-717eb856a8febe98fee6e1819264c6957b26d467202ece47fff617fc6ad4a472 WatchSource:0}: Error finding container 717eb856a8febe98fee6e1819264c6957b26d467202ece47fff617fc6ad4a472: Status 404 returned error can't find the container with id 717eb856a8febe98fee6e1819264c6957b26d467202ece47fff617fc6ad4a472 Nov 24 21:52:47 crc kubenswrapper[4767]: W1124 21:52:47.279510 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97bfe853_33bd_4dfc_b7bf_f9c82d9d0ba8.slice/crio-23c75c9b98c2d8f5cb1abbba99362c96a0314f3a0727605694c44bf83183b3b8 WatchSource:0}: Error finding container 23c75c9b98c2d8f5cb1abbba99362c96a0314f3a0727605694c44bf83183b3b8: Status 404 returned error can't find the container with id 23c75c9b98c2d8f5cb1abbba99362c96a0314f3a0727605694c44bf83183b3b8 Nov 24 21:52:47 crc kubenswrapper[4767]: W1124 21:52:47.281491 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacb0f017_b32b_4d0a_98b5_bd8d4db084ea.slice/crio-3588e8ee2db8ce180b13bf24655e5079b5faafda1f302397ce88c7e5d079f745 WatchSource:0}: Error finding container 3588e8ee2db8ce180b13bf24655e5079b5faafda1f302397ce88c7e5d079f745: Status 404 returned error can't find the container with id 3588e8ee2db8ce180b13bf24655e5079b5faafda1f302397ce88c7e5d079f745 Nov 24 21:52:47 crc kubenswrapper[4767]: W1124 21:52:47.282602 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78ad5af3_1937_484b_bd41_9a7ac9d09db3.slice/crio-ec535b3c05dfe94c6f06398631ef89655eb16327cfa610523d8f4632f03a4f1a WatchSource:0}: Error finding container ec535b3c05dfe94c6f06398631ef89655eb16327cfa610523d8f4632f03a4f1a: Status 404 returned error can't find the container with id ec535b3c05dfe94c6f06398631ef89655eb16327cfa610523d8f4632f03a4f1a Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.283341 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6"] Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.287770 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8"] Nov 24 21:52:47 crc kubenswrapper[4767]: W1124 21:52:47.290107 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7220fb1_add2_490e_9a22_09ca48f0de97.slice/crio-9563a017d20a6b3340b3fbc8d778cad5452f8ff0a37b3e470af1d8de5095da6e WatchSource:0}: Error finding container 9563a017d20a6b3340b3fbc8d778cad5452f8ff0a37b3e470af1d8de5095da6e: Status 404 returned error can't find the container with id 9563a017d20a6b3340b3fbc8d778cad5452f8ff0a37b3e470af1d8de5095da6e Nov 24 21:52:47 
crc kubenswrapper[4767]: I1124 21:52:47.291848 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn"] Nov 24 21:52:47 crc kubenswrapper[4767]: W1124 21:52:47.314666 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb5e8630_50f8_4d2c_a77a_d23b6441386a.slice/crio-b76c6ff9ff739a129faf2a4f8fe375490078803627dcb81a317b85fd7f88251d WatchSource:0}: Error finding container b76c6ff9ff739a129faf2a4f8fe375490078803627dcb81a317b85fd7f88251d: Status 404 returned error can't find the container with id b76c6ff9ff739a129faf2a4f8fe375490078803627dcb81a317b85fd7f88251d Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.331497 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm"] Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.350388 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z"] Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.360203 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7fkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovn-operator-controller-manager-66cf5c67ff-g7fnm_openstack-operators(ea0b61d0-e20f-40eb-a3a8-329ff271f057): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.360491 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnz47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-7bvtn_openstack-operators(0266992d-7010-4fa3-9a94-2a7ab457f4ca): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.362399 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9"] Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.363956 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnz47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-7bvtn_openstack-operators(0266992d-7010-4fa3-9a94-2a7ab457f4ca): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.365019 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7fkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-g7fnm_openstack-operators(ea0b61d0-e20f-40eb-a3a8-329ff271f057): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.365186 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" podUID="0266992d-7010-4fa3-9a94-2a7ab457f4ca"
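ErrImagePull "pull QPS exceeded" is not a registry failure but kubelet's own client-side throttle: image pulls pass through a token bucket sized by registryPullQPS and registryBurst in the KubeletConfiguration (the documented defaults are 5 pulls per second with a burst of 10, assumed here rather than read from this node's config). Starting roughly twenty operator pods at once drains the bucket, so the overflow pulls fail immediately and the pods are requeued. A minimal reproduction of the bucket's behaviour with golang.org/x/time/rate:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// registryPullQPS=5, registryBurst=10: the documented
	// KubeletConfiguration defaults (an assumption for this sketch).
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	// Twenty near-simultaneous image pulls, like the operator pods in
	// this log: the burst admits about the first ten, the rest are
	// rejected outright instead of queueing.
	for i := 1; i <= 20; i++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: started\n", i)
		} else {
			fmt.Printf("pull %2d: ErrImagePull: pull QPS exceeded\n", i)
		}
	}
}
```

Pre-pulling the images or raising registryPullQPS/registryBurst avoids this class of error; the affected pods are simply retried on later syncs.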
pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" podUID="ea0b61d0-e20f-40eb-a3a8-329ff271f057" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.375354 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l6tc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-vdr7z_openstack-operators(bde0dfef-808a-4851-81a8-968847586652): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.379108 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn"] Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.381856 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k"] Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.383956 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l6tc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-vdr7z_openstack-operators(bde0dfef-808a-4851-81a8-968847586652): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.385508 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" podUID="bde0dfef-808a-4851-81a8-968847586652" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.386476 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.163:5001/openstack-k8s-operators/watcher-operator:49918c72231b2800072f7b29d099eb600032bdb4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qbtv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5c96f79b7c-4msp7_openstack-operators(0265238d-c56a-428f-a359-a2e9cff33593): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.386666 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8cc2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4sfq7_openstack-operators(52982ab5-3f6d-47fa-baf9-c6957e170ffe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.386760 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4fcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-hpv5b_openstack-operators(aa98c97b-2d21-481f-9ddf-3e5adce9f626): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.388057 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7" podUID="52982ab5-3f6d-47fa-baf9-c6957e170ffe" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.390504 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4fcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-hpv5b_openstack-operators(aa98c97b-2d21-481f-9ddf-3e5adce9f626): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.390491 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxcbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-2nr8k_openstack-operators(ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b): ErrImagePull: 
pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.390612 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qbtv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5c96f79b7c-4msp7_openstack-operators(0265238d-c56a-428f-a359-a2e9cff33593): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.392281 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" podUID="0265238d-c56a-428f-a359-a2e9cff33593" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.392411 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" podUID="aa98c97b-2d21-481f-9ddf-3e5adce9f626" Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.393454 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7"] Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.399407 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxcbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-2nr8k_openstack-operators(ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.401057 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" podUID="ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b" Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.406065 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7"] Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.420683 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b"] Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.575405 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68"] Nov 24 21:52:47 crc kubenswrapper[4767]: W1124 21:52:47.585813 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f35807c_54db_4e6e_aeb1_8f8b15b6cbb8.slice/crio-28bc0878de4c1c59246c05c2870afa281e6ba2cfc80e22aae56c91691efdd97d WatchSource:0}: Error finding container 28bc0878de4c1c59246c05c2870afa281e6ba2cfc80e22aae56c91691efdd97d: Status 404 returned error can't find the container with id 28bc0878de4c1c59246c05c2870afa281e6ba2cfc80e22aae56c91691efdd97d Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.604975 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:47 crc kubenswrapper[4767]: I1124 21:52:47.605102 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 
21:52:47.605231 4767 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.605295 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs podName:0ac691b7-c7ad-467b-b4f2-46e9d52c450f nodeName:}" failed. No retries permitted until 2025-11-24 21:52:49.605263191 +0000 UTC m=+852.522246563 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs") pod "openstack-operator-controller-manager-5d749b69b6-ns4rd" (UID: "0ac691b7-c7ad-467b-b4f2-46e9d52c450f") : secret "webhook-server-cert" not found Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.605370 4767 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 21:52:47 crc kubenswrapper[4767]: E1124 21:52:47.605398 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs podName:0ac691b7-c7ad-467b-b4f2-46e9d52c450f nodeName:}" failed. No retries permitted until 2025-11-24 21:52:49.605388505 +0000 UTC m=+852.522371877 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs") pod "openstack-operator-controller-manager-5d749b69b6-ns4rd" (UID: "0ac691b7-c7ad-467b-b4f2-46e9d52c450f") : secret "metrics-server-cert" not found Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.269910 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp" event={"ID":"45530d57-164d-48f7-89e1-0a0f85ccb029","Type":"ContainerStarted","Data":"717eb856a8febe98fee6e1819264c6957b26d467202ece47fff617fc6ad4a472"} Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.271051 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" event={"ID":"bde0dfef-808a-4851-81a8-968847586652","Type":"ContainerStarted","Data":"0c0b4aaaa511b4ce21e34badbc7b5dd66316eb3a2e194ab2efabce39f074f473"} Nov 24 21:52:48 crc kubenswrapper[4767]: E1124 21:52:48.274153 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" podUID="bde0dfef-808a-4851-81a8-968847586652" Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.274624 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn" event={"ID":"a5a1f537-9c37-40a5-9f2f-a9ec762ca458","Type":"ContainerStarted","Data":"630fc23866389ae0e5d43b787b1f5f94743110891c095b482a589fd632ed850a"} Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.276107 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb" 
event={"ID":"acb0f017-b32b-4d0a-98b5-bd8d4db084ea","Type":"ContainerStarted","Data":"3588e8ee2db8ce180b13bf24655e5079b5faafda1f302397ce88c7e5d079f745"} Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.277864 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" event={"ID":"ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b","Type":"ContainerStarted","Data":"ad2242862c29ca16ca53741aecf9b252b8e4527499608bdcd1125e4b4c622f13"} Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.280880 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" event={"ID":"0265238d-c56a-428f-a359-a2e9cff33593","Type":"ContainerStarted","Data":"e99758338e73974cd3542314706e1614a34f44fa7d53c578198f4ca19b707f44"} Nov 24 21:52:48 crc kubenswrapper[4767]: E1124 21:52:48.282750 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.163:5001/openstack-k8s-operators/watcher-operator:49918c72231b2800072f7b29d099eb600032bdb4\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" podUID="0265238d-c56a-428f-a359-a2e9cff33593" Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.287478 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" event={"ID":"7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8","Type":"ContainerStarted","Data":"28bc0878de4c1c59246c05c2870afa281e6ba2cfc80e22aae56c91691efdd97d"} Nov 24 21:52:48 crc kubenswrapper[4767]: E1124 21:52:48.287695 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" podUID="ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b" Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.294436 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8" event={"ID":"c4266ab7-4886-4015-9a87-6454fc59e9c5","Type":"ContainerStarted","Data":"c6775b33457cbd4b79fcaa2603f5d716f0481ee73cbc7f2da7d7ca5fca583efc"} Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.296645 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" event={"ID":"ea0b61d0-e20f-40eb-a3a8-329ff271f057","Type":"ContainerStarted","Data":"ed8bd90a23b3c527f59b926a581a3a25387be7293832ba74a39ead4186b5ebaf"} Nov 24 21:52:48 crc kubenswrapper[4767]: E1124 21:52:48.298796 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" podUID="ea0b61d0-e20f-40eb-a3a8-329ff271f057" Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.307988 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" event={"ID":"0266992d-7010-4fa3-9a94-2a7ab457f4ca","Type":"ContainerStarted","Data":"65d67f5a61fbc00bd80a3feecc84baaefc8d74dd6f65a1e5e9207be09fee1483"} Nov 24 21:52:48 crc kubenswrapper[4767]: E1124 21:52:48.310452 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" podUID="0266992d-7010-4fa3-9a94-2a7ab457f4ca" Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.310548 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9" event={"ID":"fb5e8630-50f8-4d2c-a77a-d23b6441386a","Type":"ContainerStarted","Data":"b76c6ff9ff739a129faf2a4f8fe375490078803627dcb81a317b85fd7f88251d"} Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.311505 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6" event={"ID":"78ad5af3-1937-484b-bd41-9a7ac9d09db3","Type":"ContainerStarted","Data":"ec535b3c05dfe94c6f06398631ef89655eb16327cfa610523d8f4632f03a4f1a"} Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.312510 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" event={"ID":"b7220fb1-add2-490e-9a22-09ca48f0de97","Type":"ContainerStarted","Data":"9563a017d20a6b3340b3fbc8d778cad5452f8ff0a37b3e470af1d8de5095da6e"} Nov 24 21:52:48 crc kubenswrapper[4767]: E1124 21:52:48.328929 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7" podUID="52982ab5-3f6d-47fa-baf9-c6957e170ffe" Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.332396 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx" event={"ID":"97bfe853-33bd-4dfc-b7bf-f9c82d9d0ba8","Type":"ContainerStarted","Data":"23c75c9b98c2d8f5cb1abbba99362c96a0314f3a0727605694c44bf83183b3b8"} Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.332433 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7" event={"ID":"52982ab5-3f6d-47fa-baf9-c6957e170ffe","Type":"ContainerStarted","Data":"3039c2abdd4f7d22bbcbcb1e4ef52f7228a870fa34e7c3081d2e5bbba5d6d070"} Nov 24 21:52:48 crc kubenswrapper[4767]: I1124 21:52:48.332455 4767 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" event={"ID":"aa98c97b-2d21-481f-9ddf-3e5adce9f626","Type":"ContainerStarted","Data":"e1c5c671078a879297d9179d167aa526e910514e614b663640ff518294a9ac79"} Nov 24 21:52:48 crc kubenswrapper[4767]: E1124 21:52:48.344458 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" podUID="aa98c97b-2d21-481f-9ddf-3e5adce9f626" Nov 24 21:52:49 crc kubenswrapper[4767]: I1124 21:52:49.251763 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc\" (UID: \"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:52:49 crc kubenswrapper[4767]: I1124 21:52:49.259224 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc\" (UID: \"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:52:49 crc kubenswrapper[4767]: E1124 21:52:49.336468 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7" podUID="52982ab5-3f6d-47fa-baf9-c6957e170ffe" Nov 24 21:52:49 crc kubenswrapper[4767]: E1124 21:52:49.337614 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.163:5001/openstack-k8s-operators/watcher-operator:49918c72231b2800072f7b29d099eb600032bdb4\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" podUID="0265238d-c56a-428f-a359-a2e9cff33593" Nov 24 21:52:49 crc kubenswrapper[4767]: E1124 21:52:49.337620 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" podUID="ea0b61d0-e20f-40eb-a3a8-329ff271f057" Nov 24 21:52:49 crc kubenswrapper[4767]: 
E1124 21:52:49.338154 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" podUID="aa98c97b-2d21-481f-9ddf-3e5adce9f626" Nov 24 21:52:49 crc kubenswrapper[4767]: E1124 21:52:49.338450 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" podUID="ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b" Nov 24 21:52:49 crc kubenswrapper[4767]: E1124 21:52:49.338557 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" podUID="bde0dfef-808a-4851-81a8-968847586652" Nov 24 21:52:49 crc kubenswrapper[4767]: E1124 21:52:49.338609 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" podUID="0266992d-7010-4fa3-9a94-2a7ab457f4ca" Nov 24 21:52:49 crc kubenswrapper[4767]: I1124 21:52:49.478194 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:52:49 crc kubenswrapper[4767]: I1124 21:52:49.658074 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:49 crc kubenswrapper[4767]: I1124 21:52:49.658291 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:49 crc kubenswrapper[4767]: I1124 21:52:49.662838 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-webhook-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:49 crc kubenswrapper[4767]: I1124 21:52:49.669093 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ac691b7-c7ad-467b-b4f2-46e9d52c450f-metrics-certs\") pod \"openstack-operator-controller-manager-5d749b69b6-ns4rd\" (UID: \"0ac691b7-c7ad-467b-b4f2-46e9d52c450f\") " pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:49 crc kubenswrapper[4767]: I1124 21:52:49.934119 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:52:52 crc kubenswrapper[4767]: I1124 21:52:52.368849 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:52 crc kubenswrapper[4767]: I1124 21:52:52.369208 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:52 crc kubenswrapper[4767]: I1124 21:52:52.429347 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:53 crc kubenswrapper[4767]: I1124 21:52:53.440031 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:53 crc kubenswrapper[4767]: I1124 21:52:53.511425 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gwdqz"] Nov 24 21:52:55 crc kubenswrapper[4767]: I1124 21:52:55.372906 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gwdqz" podUID="76272833-44ed-4e2f-b20f-1479146df875" containerName="registry-server" containerID="cri-o://041597e2091ee835339a2a86e7c56222e3e686fe2852a4c0ba56c47e81b6d14d" gracePeriod=2 Nov 24 21:52:56 crc kubenswrapper[4767]: I1124 21:52:56.382523 4767 generic.go:334] "Generic (PLEG): container finished" podID="76272833-44ed-4e2f-b20f-1479146df875" containerID="041597e2091ee835339a2a86e7c56222e3e686fe2852a4c0ba56c47e81b6d14d" exitCode=0 Nov 24 21:52:56 crc kubenswrapper[4767]: I1124 21:52:56.382586 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gwdqz" event={"ID":"76272833-44ed-4e2f-b20f-1479146df875","Type":"ContainerDied","Data":"041597e2091ee835339a2a86e7c56222e3e686fe2852a4c0ba56c47e81b6d14d"} Nov 24 21:52:58 crc kubenswrapper[4767]: E1124 21:52:58.543586 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991" Nov 24 21:52:58 crc kubenswrapper[4767]: E1124 21:52:58.544042 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pzw45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-68b95954c9-ns9km_openstack-operators(e8cfe9d6-3aba-44af-9dbc-679d34dc98d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.312977 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.408436 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gwdqz" event={"ID":"76272833-44ed-4e2f-b20f-1479146df875","Type":"ContainerDied","Data":"2373e750fdf0d5119f38118b8de7133fbd87083a6e7953faaf7c819ba6df04f1"} Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.408508 4767 scope.go:117] "RemoveContainer" containerID="041597e2091ee835339a2a86e7c56222e3e686fe2852a4c0ba56c47e81b6d14d" Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.408693 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gwdqz" Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.414170 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-catalog-content\") pod \"76272833-44ed-4e2f-b20f-1479146df875\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.414238 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg4jt\" (UniqueName: \"kubernetes.io/projected/76272833-44ed-4e2f-b20f-1479146df875-kube-api-access-mg4jt\") pod \"76272833-44ed-4e2f-b20f-1479146df875\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.414494 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-utilities\") pod \"76272833-44ed-4e2f-b20f-1479146df875\" (UID: \"76272833-44ed-4e2f-b20f-1479146df875\") " Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.425333 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-utilities" (OuterVolumeSpecName: "utilities") pod "76272833-44ed-4e2f-b20f-1479146df875" (UID: "76272833-44ed-4e2f-b20f-1479146df875"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.444461 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76272833-44ed-4e2f-b20f-1479146df875-kube-api-access-mg4jt" (OuterVolumeSpecName: "kube-api-access-mg4jt") pod "76272833-44ed-4e2f-b20f-1479146df875" (UID: "76272833-44ed-4e2f-b20f-1479146df875"). InnerVolumeSpecName "kube-api-access-mg4jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.476165 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9" event={"ID":"fb5e8630-50f8-4d2c-a77a-d23b6441386a","Type":"ContainerStarted","Data":"db14c7ef177442e6c0e4f080e77b756a8c14e6dea3d7e8a7937eca1cd31a54ed"} Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.504210 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76272833-44ed-4e2f-b20f-1479146df875" (UID: "76272833-44ed-4e2f-b20f-1479146df875"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.523700 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd"] Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.525007 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.525026 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76272833-44ed-4e2f-b20f-1479146df875-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.525039 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg4jt\" (UniqueName: \"kubernetes.io/projected/76272833-44ed-4e2f-b20f-1479146df875-kube-api-access-mg4jt\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.590428 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" event={"ID":"b7220fb1-add2-490e-9a22-09ca48f0de97","Type":"ContainerStarted","Data":"060149fdc3b9a9a3e2b84322044626eb8d5c2ece4c9b879f4c5da96537662269"} Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.610934 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc"] Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.624711 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7" event={"ID":"1cb193ac-a6d0-4981-91b8-234d77ab2cd7","Type":"ContainerStarted","Data":"4295fe6d181907ecdf5573c1afc40e8bfef397220c6092ef6e689fe3fdbf582e"} Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.635317 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn" event={"ID":"a5a1f537-9c37-40a5-9f2f-a9ec762ca458","Type":"ContainerStarted","Data":"2f70149e936ed49fc4e8cacfd25fd8faeeda7d97132b61008e455df6ac67a5e6"} Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.742521 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gwdqz"] Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.749899 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gwdqz"] Nov 24 21:52:59 crc kubenswrapper[4767]: W1124 21:52:59.788436 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ac691b7_c7ad_467b_b4f2_46e9d52c450f.slice/crio-7c2e1cd1e3ffbe01cd9d4fb304d67d73bdf2dd7232b50aac26ef41132d1f37fc WatchSource:0}: Error finding container 7c2e1cd1e3ffbe01cd9d4fb304d67d73bdf2dd7232b50aac26ef41132d1f37fc: Status 404 returned error can't find the container with id 7c2e1cd1e3ffbe01cd9d4fb304d67d73bdf2dd7232b50aac26ef41132d1f37fc Nov 24 21:52:59 crc kubenswrapper[4767]: I1124 21:52:59.849536 4767 scope.go:117] "RemoveContainer" containerID="edf2f4ecabe9473ce800b146a1962220cf956873e8b1e516d4d8dcabdbe75501" Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.323643 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="76272833-44ed-4e2f-b20f-1479146df875" path="/var/lib/kubelet/pods/76272833-44ed-4e2f-b20f-1479146df875/volumes" Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.422493 4767 scope.go:117] "RemoveContainer" containerID="fabe81467ba63f27af06164b2b9df860ec778ec595ebba341ca7207ba3577f98" Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.642796 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" event={"ID":"0ac691b7-c7ad-467b-b4f2-46e9d52c450f","Type":"ContainerStarted","Data":"7c2e1cd1e3ffbe01cd9d4fb304d67d73bdf2dd7232b50aac26ef41132d1f37fc"} Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.645223 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw" event={"ID":"5abc7b42-2e06-4722-b3e4-aab9de868251","Type":"ContainerStarted","Data":"819c98646880bafc41d4d7fb23dadd03cfb363740f98d5714cdb6d54ec53552e"} Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.647653 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp" event={"ID":"45530d57-164d-48f7-89e1-0a0f85ccb029","Type":"ContainerStarted","Data":"19db06610d9bfbb9e4df06724ac3553663ba67b5956cd079a2cc03caaced9028"} Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.649309 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87" event={"ID":"44564b48-f353-4b3f-a0b7-b42ecd1bf838","Type":"ContainerStarted","Data":"a2f7afc275bc8b7fc2cffc4b4c65363ad63726c393d8e8b8703356fea165babb"} Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.650797 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb" event={"ID":"acb0f017-b32b-4d0a-98b5-bd8d4db084ea","Type":"ContainerStarted","Data":"45308a1aa7a3ff3daa1440ecf364b35eb93c243789377145eff05de6621f1370"} Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.652487 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg" event={"ID":"945744e6-8179-45cb-a020-de9b73fa89a1","Type":"ContainerStarted","Data":"8bb22e4e7caed9285a10b704f3cd485a1d375a1261556e52eb032eb1b361f9c9"} Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.656858 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8" event={"ID":"c4266ab7-4886-4015-9a87-6454fc59e9c5","Type":"ContainerStarted","Data":"51bc7db47e6e0d9530f7eb755e0d308bc3767596686ba62df1bd7096ea380a93"} Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.663738 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx" event={"ID":"97bfe853-33bd-4dfc-b7bf-f9c82d9d0ba8","Type":"ContainerStarted","Data":"9cd2f27f6ae4cd70b491064dda2459ca38ca10b442c2219b7d1a3c241bf7f41e"} Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.666096 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" event={"ID":"7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8","Type":"ContainerStarted","Data":"9dda76ac56dfcfc149d14ecfcb89e59fd7d4c3640bd6cb5867d2fec51f0f8b9d"} Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.668557 4767 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6" event={"ID":"78ad5af3-1937-484b-bd41-9a7ac9d09db3","Type":"ContainerStarted","Data":"cd81e88b0b1f7aff330a0837dcbae08ec53556e37961ce8de89e10d225de9fd3"} Nov 24 21:53:00 crc kubenswrapper[4767]: I1124 21:53:00.670231 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" event={"ID":"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d","Type":"ContainerStarted","Data":"548a86e853e91613991b1bde21229673f472f22f40026540c0e3011f0c43e264"} Nov 24 21:53:01 crc kubenswrapper[4767]: I1124 21:53:01.682135 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" event={"ID":"0ac691b7-c7ad-467b-b4f2-46e9d52c450f","Type":"ContainerStarted","Data":"8753a6e5200c535973312cdd2e7e5dd5f1e8b63b8c176d1426e3bb523f8f3696"} Nov 24 21:53:01 crc kubenswrapper[4767]: I1124 21:53:01.682444 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:53:01 crc kubenswrapper[4767]: I1124 21:53:01.719913 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" podStartSLOduration=16.719894273 podStartE2EDuration="16.719894273s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:53:01.708679555 +0000 UTC m=+864.625662927" watchObservedRunningTime="2025-11-24 21:53:01.719894273 +0000 UTC m=+864.636877645" Nov 24 21:53:04 crc kubenswrapper[4767]: E1124 21:53:04.120455 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" podUID="e8cfe9d6-3aba-44af-9dbc-679d34dc98d0" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.721769 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7" event={"ID":"1cb193ac-a6d0-4981-91b8-234d77ab2cd7","Type":"ContainerStarted","Data":"2802ca02f484e7ed4373e1784180c45731d3445774d3facf59a35095164e2888"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.723172 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.730557 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.741801 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6" event={"ID":"78ad5af3-1937-484b-bd41-9a7ac9d09db3","Type":"ContainerStarted","Data":"8be57694d6db98718ca006459f1ca89855d9d251a61acf5e75ec8a4430affd39"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.742505 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.745522 4767 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" event={"ID":"e8cfe9d6-3aba-44af-9dbc-679d34dc98d0","Type":"ContainerStarted","Data":"801cfd59ecd941772ff0bde325fb210c9005c749f0e29558049dafdd6cd84777"} Nov 24 21:53:04 crc kubenswrapper[4767]: E1124 21:53:04.747317 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991\\\"\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" podUID="e8cfe9d6-3aba-44af-9dbc-679d34dc98d0" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.753225 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hxzx7" podStartSLOduration=3.914668092 podStartE2EDuration="20.75320789s" podCreationTimestamp="2025-11-24 21:52:44 +0000 UTC" firstStartedPulling="2025-11-24 21:52:46.860603465 +0000 UTC m=+849.777586837" lastFinishedPulling="2025-11-24 21:53:03.699143263 +0000 UTC m=+866.616126635" observedRunningTime="2025-11-24 21:53:04.747612981 +0000 UTC m=+867.664596353" watchObservedRunningTime="2025-11-24 21:53:04.75320789 +0000 UTC m=+867.670191262" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.761336 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx" event={"ID":"97bfe853-33bd-4dfc-b7bf-f9c82d9d0ba8","Type":"ContainerStarted","Data":"5e78cb6a24547c79153f2e8f05ea44463c27001b5e669999907abd90de5493ff"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.761492 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.766704 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9" event={"ID":"fb5e8630-50f8-4d2c-a77a-d23b6441386a","Type":"ContainerStarted","Data":"acebf46e785555a5724ac744ea01e088adeb78cb6fa26c2d33e5a97ffe6cbf46"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.767806 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.772305 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.782965 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" event={"ID":"7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8","Type":"ContainerStarted","Data":"7a714df6f9ca6fdeb81fa1b526815b94fa3be89c1d0304a13c3cd23da9abc89b"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.783182 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.815187 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6" podStartSLOduration=3.458864945 
podStartE2EDuration="19.815164636s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.285418755 +0000 UTC m=+850.202402127" lastFinishedPulling="2025-11-24 21:53:03.641718446 +0000 UTC m=+866.558701818" observedRunningTime="2025-11-24 21:53:04.801592051 +0000 UTC m=+867.718575443" watchObservedRunningTime="2025-11-24 21:53:04.815164636 +0000 UTC m=+867.732148008" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.852665 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb" event={"ID":"acb0f017-b32b-4d0a-98b5-bd8d4db084ea","Type":"ContainerStarted","Data":"805ea09b6c48e14faaabb287031d9f1df597397863bb74dfac6388e7d02b4d50"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.853571 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.861396 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" event={"ID":"ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b","Type":"ContainerStarted","Data":"27f5f572289e073f7192e1076d4898d5daedcd588d3251724f97b8400e9ea872"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.861460 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" event={"ID":"ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b","Type":"ContainerStarted","Data":"fec8b568fc6515479e2bcbbb022d4991b9bec88b0b86ca790174bd5647d87a13"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.862170 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.893139 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-7v5f9" podStartSLOduration=3.632428864 podStartE2EDuration="19.893117905s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.320204671 +0000 UTC m=+850.237188043" lastFinishedPulling="2025-11-24 21:53:03.580893702 +0000 UTC m=+866.497877084" observedRunningTime="2025-11-24 21:53:04.853787911 +0000 UTC m=+867.770771293" watchObservedRunningTime="2025-11-24 21:53:04.893117905 +0000 UTC m=+867.810101277" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.899280 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" event={"ID":"b7220fb1-add2-490e-9a22-09ca48f0de97","Type":"ContainerStarted","Data":"31771f2e9174f34913158648c2b72370b337986b410c141428123969e56034ac"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.900475 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.903813 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.903824 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx" podStartSLOduration=3.564580651 
podStartE2EDuration="19.903801308s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.281858984 +0000 UTC m=+850.198842356" lastFinishedPulling="2025-11-24 21:53:03.621079611 +0000 UTC m=+866.538063013" observedRunningTime="2025-11-24 21:53:04.879812048 +0000 UTC m=+867.796795420" watchObservedRunningTime="2025-11-24 21:53:04.903801308 +0000 UTC m=+867.820784680" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.917129 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp" event={"ID":"45530d57-164d-48f7-89e1-0a0f85ccb029","Type":"ContainerStarted","Data":"42f0f7f64da2e51c5138a3785f99651a84995558f967f352af913a8dbc1d4c64"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.925031 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" podStartSLOduration=3.869759442 podStartE2EDuration="19.925013789s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.588540397 +0000 UTC m=+850.505523769" lastFinishedPulling="2025-11-24 21:53:03.643794744 +0000 UTC m=+866.560778116" observedRunningTime="2025-11-24 21:53:04.923189378 +0000 UTC m=+867.840172750" watchObservedRunningTime="2025-11-24 21:53:04.925013789 +0000 UTC m=+867.841997171" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.940621 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87" event={"ID":"44564b48-f353-4b3f-a0b7-b42ecd1bf838","Type":"ContainerStarted","Data":"8c8f74a66089b4fb968f9b68fa852238c135ea3d86375a4f2237fa0ff683e339"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.940672 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.940718 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.940729 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.947857 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8" event={"ID":"c4266ab7-4886-4015-9a87-6454fc59e9c5","Type":"ContainerStarted","Data":"19c6c6e0ba99ed7692ba1d383f4979351a2ce9a0e94390d30b41948faa02dfa5"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.949793 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.950259 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.986653 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" event={"ID":"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d","Type":"ContainerStarted","Data":"3ff28c5b9cbeb11ac5f2d1bdc22be3a3578cdd811218d775ebd475776eb1e2d2"} Nov 24 21:53:04 
crc kubenswrapper[4767]: I1124 21:53:04.986850 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" event={"ID":"1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d","Type":"ContainerStarted","Data":"795ce954fb892b97b965b16b7c0d938386e536259778548b2c1940578e95f6b4"} Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.987021 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:53:04 crc kubenswrapper[4767]: I1124 21:53:04.998085 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb" podStartSLOduration=3.664613487 podStartE2EDuration="19.9980637s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.283968334 +0000 UTC m=+850.200951706" lastFinishedPulling="2025-11-24 21:53:03.617418537 +0000 UTC m=+866.534401919" observedRunningTime="2025-11-24 21:53:04.950821221 +0000 UTC m=+867.867804593" watchObservedRunningTime="2025-11-24 21:53:04.9980637 +0000 UTC m=+867.915047072" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.008950 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zdlcp" podStartSLOduration=3.556894014 podStartE2EDuration="20.008934307s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.281721081 +0000 UTC m=+850.198704443" lastFinishedPulling="2025-11-24 21:53:03.733761364 +0000 UTC m=+866.650744736" observedRunningTime="2025-11-24 21:53:04.984904607 +0000 UTC m=+867.901887979" watchObservedRunningTime="2025-11-24 21:53:05.008934307 +0000 UTC m=+867.925917669" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.012076 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" event={"ID":"aa98c97b-2d21-481f-9ddf-3e5adce9f626","Type":"ContainerStarted","Data":"cdef57eff44c41872418fe196113b76227f3720ab18b6ebcb6fe546fb0dde36c"} Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.012103 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" event={"ID":"aa98c97b-2d21-481f-9ddf-3e5adce9f626","Type":"ContainerStarted","Data":"a49f3cacc4e9e48febed9105721e73fb073eba1c5ab2df5fa661eea9b32f4d43"} Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.012851 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.016676 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg" event={"ID":"945744e6-8179-45cb-a020-de9b73fa89a1","Type":"ContainerStarted","Data":"c1ddeb4116018bd6af44393fd71bbcbdd932d298e080ba1ec725a7cde20b3126"} Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.017658 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.023091 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" 
podStartSLOduration=3.729467594 podStartE2EDuration="20.023081198s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.300971926 +0000 UTC m=+850.217955298" lastFinishedPulling="2025-11-24 21:53:03.59458553 +0000 UTC m=+866.511568902" observedRunningTime="2025-11-24 21:53:05.005545631 +0000 UTC m=+867.922529003" watchObservedRunningTime="2025-11-24 21:53:05.023081198 +0000 UTC m=+867.940064570" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.027189 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.045342 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw" event={"ID":"5abc7b42-2e06-4722-b3e4-aab9de868251","Type":"ContainerStarted","Data":"7e5bc26f85c4c20e8005d06cba8b3e4c9338f5ba8798d4adf56477eff70ab13b"} Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.046135 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.063968 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.080332 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn" event={"ID":"a5a1f537-9c37-40a5-9f2f-a9ec762ca458","Type":"ContainerStarted","Data":"dc6477b63c4ec7dd5cbf77aca87a504ea6ee1612828ffcbc47ffe0c78e7ad215"} Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.082834 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.096804 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" podStartSLOduration=3.89053225 podStartE2EDuration="20.096783147s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.39035395 +0000 UTC m=+850.307337322" lastFinishedPulling="2025-11-24 21:53:03.596604847 +0000 UTC m=+866.513588219" observedRunningTime="2025-11-24 21:53:05.032775163 +0000 UTC m=+867.949758545" watchObservedRunningTime="2025-11-24 21:53:05.096783147 +0000 UTC m=+868.013766519" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.103306 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.109437 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-z6h87" podStartSLOduration=4.010275631 podStartE2EDuration="21.109419155s" podCreationTimestamp="2025-11-24 21:52:44 +0000 UTC" firstStartedPulling="2025-11-24 21:52:46.496387312 +0000 UTC m=+849.413370684" lastFinishedPulling="2025-11-24 21:53:03.595530836 +0000 UTC m=+866.512514208" observedRunningTime="2025-11-24 21:53:05.082712618 +0000 UTC m=+867.999695980" watchObservedRunningTime="2025-11-24 21:53:05.109419155 +0000 UTC m=+868.026402537" Nov 24 21:53:05 crc 
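
[annotation] The "SyncLoop (probe)" lines flip from an empty status to status="ready" once a pod's readiness probe first succeeds; from outside the node the same transition surfaces as the Pod's Ready condition. A minimal sketch with the Python kubernetes client, assuming kubeconfig access to this cluster (pod and namespace names taken from the log):

```python
from kubernetes import client, config

# Read the Pod "Ready" condition that mirrors the readiness-probe flips above.
config.load_kube_config()
v1 = client.CoreV1Api()
pod = v1.read_namespaced_pod(
    "nova-operator-controller-manager-79556f57fc-jjk4x", "openstack-operators")
ready = next((c.status for c in (pod.status.conditions or []) if c.type == "Ready"),
             "Unknown")
print("Ready:", ready)  # "True" once the readiness probe has succeeded
```
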
kubenswrapper[4767]: I1124 21:53:05.114390 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" podStartSLOduration=3.906203904 podStartE2EDuration="20.114378006s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.386636894 +0000 UTC m=+850.303620266" lastFinishedPulling="2025-11-24 21:53:03.594810996 +0000 UTC m=+866.511794368" observedRunningTime="2025-11-24 21:53:05.102972862 +0000 UTC m=+868.019956244" watchObservedRunningTime="2025-11-24 21:53:05.114378006 +0000 UTC m=+868.031361378" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.144039 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" podStartSLOduration=16.414018554 podStartE2EDuration="20.144020646s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:59.824950704 +0000 UTC m=+862.741934076" lastFinishedPulling="2025-11-24 21:53:03.554952776 +0000 UTC m=+866.471936168" observedRunningTime="2025-11-24 21:53:05.136446471 +0000 UTC m=+868.053429863" watchObservedRunningTime="2025-11-24 21:53:05.144020646 +0000 UTC m=+868.061004018" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.171179 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-wtwkg" podStartSLOduration=4.409819525 podStartE2EDuration="21.171159645s" podCreationTimestamp="2025-11-24 21:52:44 +0000 UTC" firstStartedPulling="2025-11-24 21:52:46.870110685 +0000 UTC m=+849.787094057" lastFinishedPulling="2025-11-24 21:53:03.631450765 +0000 UTC m=+866.548434177" observedRunningTime="2025-11-24 21:53:05.161338187 +0000 UTC m=+868.078321559" watchObservedRunningTime="2025-11-24 21:53:05.171159645 +0000 UTC m=+868.088143027" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.189016 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8" podStartSLOduration=3.858924014 podStartE2EDuration="20.188999931s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.301500211 +0000 UTC m=+850.218483583" lastFinishedPulling="2025-11-24 21:53:03.631576128 +0000 UTC m=+866.548559500" observedRunningTime="2025-11-24 21:53:05.178652358 +0000 UTC m=+868.095635730" watchObservedRunningTime="2025-11-24 21:53:05.188999931 +0000 UTC m=+868.105983303" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.195459 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-c42hw" podStartSLOduration=4.429776011 podStartE2EDuration="21.195445033s" podCreationTimestamp="2025-11-24 21:52:44 +0000 UTC" firstStartedPulling="2025-11-24 21:52:46.865805103 +0000 UTC m=+849.782788475" lastFinishedPulling="2025-11-24 21:53:03.631474125 +0000 UTC m=+866.548457497" observedRunningTime="2025-11-24 21:53:05.194708453 +0000 UTC m=+868.111691815" watchObservedRunningTime="2025-11-24 21:53:05.195445033 +0000 UTC m=+868.112428405" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.401944 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7pth6" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.420876 
4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-rmwtn" podStartSLOduration=4.111801362 podStartE2EDuration="20.420856483s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.314056007 +0000 UTC m=+850.231039379" lastFinishedPulling="2025-11-24 21:53:03.623111118 +0000 UTC m=+866.540094500" observedRunningTime="2025-11-24 21:53:05.232288158 +0000 UTC m=+868.149271530" watchObservedRunningTime="2025-11-24 21:53:05.420856483 +0000 UTC m=+868.337839855" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.481335 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.481398 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.686530 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9c4l8" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.730970 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-ghpkb" Nov 24 21:53:05 crc kubenswrapper[4767]: I1124 21:53:05.760297 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-wmpbx" Nov 24 21:53:06 crc kubenswrapper[4767]: E1124 21:53:06.088595 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991\\\"\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" podUID="e8cfe9d6-3aba-44af-9dbc-679d34dc98d0" Nov 24 21:53:06 crc kubenswrapper[4767]: I1124 21:53:06.095716 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-wln68" Nov 24 21:53:09 crc kubenswrapper[4767]: I1124 21:53:09.491136 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc" Nov 24 21:53:09 crc kubenswrapper[4767]: I1124 21:53:09.940087 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5d749b69b6-ns4rd" Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.114951 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" event={"ID":"bde0dfef-808a-4851-81a8-968847586652","Type":"ContainerStarted","Data":"4233635509c6017a3f585ef63544dc139a713bbc2741e76d4deed8cce5002424"} Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.115248 4767 kubelet.go:2453] 
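
[annotation] The recurring machine-config-daemon liveness failures above are plain HTTP GETs from the kubelet against 127.0.0.1:8798/health that are refused at the TCP level. The check is easy to reproduce by hand on the node itself; a sketch (endpoint copied verbatim from the probe output, everything else assumed):

```python
import urllib.request
import urllib.error

# Re-run the kubelet's HTTP liveness check by hand (run on the node itself).
URL = "http://127.0.0.1:8798/health"  # from the probe output logged above
try:
    with urllib.request.urlopen(URL, timeout=1) as resp:
        print("probe ok:", resp.status)
except (urllib.error.URLError, OSError) as exc:
    print("probe failed:", exc)  # e.g. [Errno 111] Connection refused
```
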
"SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" event={"ID":"bde0dfef-808a-4851-81a8-968847586652","Type":"ContainerStarted","Data":"a4f7cc8d7d5054e300525a30330a9e976b49c3855beadd2d97445f9c980ab916"} Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.115406 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.116886 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" event={"ID":"0265238d-c56a-428f-a359-a2e9cff33593","Type":"ContainerStarted","Data":"64210083b68b081ae04d310087445786ffa1f8e029f59ac8973fe89a8a9e162f"} Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.116923 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" event={"ID":"0265238d-c56a-428f-a359-a2e9cff33593","Type":"ContainerStarted","Data":"263a8a02ab3bf6c97e9e96e415318e7721e418958dc7a1392fba999a0e6d38c8"} Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.117058 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.118590 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" event={"ID":"0266992d-7010-4fa3-9a94-2a7ab457f4ca","Type":"ContainerStarted","Data":"38af708c50cd0aa6db2a387a12dee4d198a635d42ba555c50fcd39c056f3589d"} Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.118731 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" event={"ID":"0266992d-7010-4fa3-9a94-2a7ab457f4ca","Type":"ContainerStarted","Data":"c814daecc2f272c7fbcb1675720c915f094a6023002a0a2bd242168f7fae38c3"} Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.118911 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.119980 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7" event={"ID":"52982ab5-3f6d-47fa-baf9-c6957e170ffe","Type":"ContainerStarted","Data":"8d127aa43682a6c2c15ecc5ad4aff47ec249710f004687b5b279fcb352bea2ae"} Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.121705 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" event={"ID":"ea0b61d0-e20f-40eb-a3a8-329ff271f057","Type":"ContainerStarted","Data":"10415686c90b8fda3b0f96fd4b25a2cc8a9a5da4f0acf5ca6ee275a4b97e7bdb"} Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.121733 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" event={"ID":"ea0b61d0-e20f-40eb-a3a8-329ff271f057","Type":"ContainerStarted","Data":"2de59fd7b153aedceeb46d62240f8935e73f6503219dff1c7366b8d76da0a114"} Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.122363 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" Nov 24 21:53:10 crc 
kubenswrapper[4767]: I1124 21:53:10.135120 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" podStartSLOduration=3.331661041 podStartE2EDuration="25.135100983s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.37518516 +0000 UTC m=+850.292168532" lastFinishedPulling="2025-11-24 21:53:09.178625102 +0000 UTC m=+872.095608474" observedRunningTime="2025-11-24 21:53:10.13146662 +0000 UTC m=+873.048449992" watchObservedRunningTime="2025-11-24 21:53:10.135100983 +0000 UTC m=+873.052084355" Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.151392 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4sfq7" podStartSLOduration=3.319091184 podStartE2EDuration="25.151373934s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.386598803 +0000 UTC m=+850.303582175" lastFinishedPulling="2025-11-24 21:53:09.218881553 +0000 UTC m=+872.135864925" observedRunningTime="2025-11-24 21:53:10.147494794 +0000 UTC m=+873.064478166" watchObservedRunningTime="2025-11-24 21:53:10.151373934 +0000 UTC m=+873.068357306" Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.170057 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" podStartSLOduration=3.337426324 podStartE2EDuration="25.170038903s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.386208102 +0000 UTC m=+850.303191474" lastFinishedPulling="2025-11-24 21:53:09.218820681 +0000 UTC m=+872.135804053" observedRunningTime="2025-11-24 21:53:10.166508163 +0000 UTC m=+873.083491535" watchObservedRunningTime="2025-11-24 21:53:10.170038903 +0000 UTC m=+873.087022275" Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.181495 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" podStartSLOduration=3.297616305 podStartE2EDuration="25.181478197s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.360302688 +0000 UTC m=+850.277286060" lastFinishedPulling="2025-11-24 21:53:09.24416458 +0000 UTC m=+872.161147952" observedRunningTime="2025-11-24 21:53:10.178540294 +0000 UTC m=+873.095523666" watchObservedRunningTime="2025-11-24 21:53:10.181478197 +0000 UTC m=+873.098461569" Nov 24 21:53:10 crc kubenswrapper[4767]: I1124 21:53:10.199502 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" podStartSLOduration=3.376681846 podStartE2EDuration="25.199484787s" podCreationTimestamp="2025-11-24 21:52:45 +0000 UTC" firstStartedPulling="2025-11-24 21:52:47.360064731 +0000 UTC m=+850.277048103" lastFinishedPulling="2025-11-24 21:53:09.182867672 +0000 UTC m=+872.099851044" observedRunningTime="2025-11-24 21:53:10.195929397 +0000 UTC m=+873.112912769" watchObservedRunningTime="2025-11-24 21:53:10.199484787 +0000 UTC m=+873.116468159" Nov 24 21:53:15 crc kubenswrapper[4767]: I1124 21:53:15.796980 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-vdr7z" Nov 24 21:53:15 crc kubenswrapper[4767]: I1124 21:53:15.908954 4767 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-7bvtn" Nov 24 21:53:15 crc kubenswrapper[4767]: I1124 21:53:15.935262 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hpv5b" Nov 24 21:53:15 crc kubenswrapper[4767]: I1124 21:53:15.957885 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-g7fnm" Nov 24 21:53:16 crc kubenswrapper[4767]: I1124 21:53:16.016942 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2nr8k" Nov 24 21:53:16 crc kubenswrapper[4767]: I1124 21:53:16.335966 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5c96f79b7c-4msp7" Nov 24 21:53:18 crc kubenswrapper[4767]: I1124 21:53:18.325967 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:53:19 crc kubenswrapper[4767]: I1124 21:53:19.199665 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" event={"ID":"e8cfe9d6-3aba-44af-9dbc-679d34dc98d0","Type":"ContainerStarted","Data":"205b0df4de5671b0935a5cc165c52a3ec80dd9c7e07e35dbdf77076bb3551e40"} Nov 24 21:53:19 crc kubenswrapper[4767]: I1124 21:53:19.200488 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" Nov 24 21:53:19 crc kubenswrapper[4767]: I1124 21:53:19.222213 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" podStartSLOduration=3.288916235 podStartE2EDuration="35.222193856s" podCreationTimestamp="2025-11-24 21:52:44 +0000 UTC" firstStartedPulling="2025-11-24 21:52:46.843953613 +0000 UTC m=+849.760936975" lastFinishedPulling="2025-11-24 21:53:18.777231214 +0000 UTC m=+881.694214596" observedRunningTime="2025-11-24 21:53:19.218151401 +0000 UTC m=+882.135134773" watchObservedRunningTime="2025-11-24 21:53:19.222193856 +0000 UTC m=+882.139177248" Nov 24 21:53:28 crc kubenswrapper[4767]: I1124 21:53:25.323783 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-ns9km" Nov 24 21:53:35 crc kubenswrapper[4767]: I1124 21:53:35.482020 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:53:35 crc kubenswrapper[4767]: I1124 21:53:35.482706 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.918440 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5dl6n"] Nov 24 21:53:44 crc kubenswrapper[4767]: E1124 21:53:44.919238 4767 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76272833-44ed-4e2f-b20f-1479146df875" containerName="extract-content" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.919251 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="76272833-44ed-4e2f-b20f-1479146df875" containerName="extract-content" Nov 24 21:53:44 crc kubenswrapper[4767]: E1124 21:53:44.919294 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76272833-44ed-4e2f-b20f-1479146df875" containerName="extract-utilities" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.919303 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="76272833-44ed-4e2f-b20f-1479146df875" containerName="extract-utilities" Nov 24 21:53:44 crc kubenswrapper[4767]: E1124 21:53:44.919319 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76272833-44ed-4e2f-b20f-1479146df875" containerName="registry-server" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.919325 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="76272833-44ed-4e2f-b20f-1479146df875" containerName="registry-server" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.919492 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="76272833-44ed-4e2f-b20f-1479146df875" containerName="registry-server" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.920257 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.925142 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.925526 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.926367 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-67hjg" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.926515 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.926809 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5dl6n"] Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.984523 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da499274-2ce5-4d67-b8a2-a85b93782ec0-config\") pod \"dnsmasq-dns-675f4bcbfc-5dl6n\" (UID: \"da499274-2ce5-4d67-b8a2-a85b93782ec0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.984651 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjc6\" (UniqueName: \"kubernetes.io/projected/da499274-2ce5-4d67-b8a2-a85b93782ec0-kube-api-access-5xjc6\") pod \"dnsmasq-dns-675f4bcbfc-5dl6n\" (UID: \"da499274-2ce5-4d67-b8a2-a85b93782ec0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.997694 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2xfrn"] Nov 24 21:53:44 crc kubenswrapper[4767]: I1124 21:53:44.999208 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.002128 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.007252 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2xfrn"] Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.085916 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-config\") pod \"dnsmasq-dns-78dd6ddcc-2xfrn\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.085963 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da499274-2ce5-4d67-b8a2-a85b93782ec0-config\") pod \"dnsmasq-dns-675f4bcbfc-5dl6n\" (UID: \"da499274-2ce5-4d67-b8a2-a85b93782ec0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.085985 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd5hf\" (UniqueName: \"kubernetes.io/projected/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-kube-api-access-vd5hf\") pod \"dnsmasq-dns-78dd6ddcc-2xfrn\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.086029 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2xfrn\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.086052 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xjc6\" (UniqueName: \"kubernetes.io/projected/da499274-2ce5-4d67-b8a2-a85b93782ec0-kube-api-access-5xjc6\") pod \"dnsmasq-dns-675f4bcbfc-5dl6n\" (UID: \"da499274-2ce5-4d67-b8a2-a85b93782ec0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.086962 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da499274-2ce5-4d67-b8a2-a85b93782ec0-config\") pod \"dnsmasq-dns-675f4bcbfc-5dl6n\" (UID: \"da499274-2ce5-4d67-b8a2-a85b93782ec0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.107060 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xjc6\" (UniqueName: \"kubernetes.io/projected/da499274-2ce5-4d67-b8a2-a85b93782ec0-kube-api-access-5xjc6\") pod \"dnsmasq-dns-675f4bcbfc-5dl6n\" (UID: \"da499274-2ce5-4d67-b8a2-a85b93782ec0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.186930 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2xfrn\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 
21:53:45.187032 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-config\") pod \"dnsmasq-dns-78dd6ddcc-2xfrn\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.187053 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd5hf\" (UniqueName: \"kubernetes.io/projected/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-kube-api-access-vd5hf\") pod \"dnsmasq-dns-78dd6ddcc-2xfrn\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.187918 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2xfrn\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.187948 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-config\") pod \"dnsmasq-dns-78dd6ddcc-2xfrn\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.209765 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd5hf\" (UniqueName: \"kubernetes.io/projected/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-kube-api-access-vd5hf\") pod \"dnsmasq-dns-78dd6ddcc-2xfrn\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.243599 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.312980 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.732115 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5dl6n"] Nov 24 21:53:45 crc kubenswrapper[4767]: W1124 21:53:45.734043 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda499274_2ce5_4d67_b8a2_a85b93782ec0.slice/crio-de79b994aa3662aa7befde66a18ccbbf5ccd2f6765695e9105910a2101a48a4e WatchSource:0}: Error finding container de79b994aa3662aa7befde66a18ccbbf5ccd2f6765695e9105910a2101a48a4e: Status 404 returned error can't find the container with id de79b994aa3662aa7befde66a18ccbbf5ccd2f6765695e9105910a2101a48a4e Nov 24 21:53:45 crc kubenswrapper[4767]: W1124 21:53:45.785580 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8687aa6b_87dd_436e_ad82_c3ecfc0ff82c.slice/crio-648d940ab0b32877d68c808f0b982ae334d3a66da63a5b083ff4c3a50780190e WatchSource:0}: Error finding container 648d940ab0b32877d68c808f0b982ae334d3a66da63a5b083ff4c3a50780190e: Status 404 returned error can't find the container with id 648d940ab0b32877d68c808f0b982ae334d3a66da63a5b083ff4c3a50780190e Nov 24 21:53:45 crc kubenswrapper[4767]: I1124 21:53:45.786030 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2xfrn"] Nov 24 21:53:46 crc kubenswrapper[4767]: I1124 21:53:46.501802 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" event={"ID":"da499274-2ce5-4d67-b8a2-a85b93782ec0","Type":"ContainerStarted","Data":"de79b994aa3662aa7befde66a18ccbbf5ccd2f6765695e9105910a2101a48a4e"} Nov 24 21:53:46 crc kubenswrapper[4767]: I1124 21:53:46.504498 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" event={"ID":"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c","Type":"ContainerStarted","Data":"648d940ab0b32877d68c808f0b982ae334d3a66da63a5b083ff4c3a50780190e"} Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.025278 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5dl6n"] Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.055308 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-bfk54"] Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.057139 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.077122 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-bfk54"] Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.127399 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-config\") pod \"dnsmasq-dns-5ccc8479f9-bfk54\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.127474 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-bfk54\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.127536 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hhlw\" (UniqueName: \"kubernetes.io/projected/7932e662-ab03-4bd6-b360-a21c21c93f1a-kube-api-access-6hhlw\") pod \"dnsmasq-dns-5ccc8479f9-bfk54\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.228909 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hhlw\" (UniqueName: \"kubernetes.io/projected/7932e662-ab03-4bd6-b360-a21c21c93f1a-kube-api-access-6hhlw\") pod \"dnsmasq-dns-5ccc8479f9-bfk54\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.228981 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-config\") pod \"dnsmasq-dns-5ccc8479f9-bfk54\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.229025 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-bfk54\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.229921 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-bfk54\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.229991 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-config\") pod \"dnsmasq-dns-5ccc8479f9-bfk54\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.269614 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hhlw\" (UniqueName: 
\"kubernetes.io/projected/7932e662-ab03-4bd6-b360-a21c21c93f1a-kube-api-access-6hhlw\") pod \"dnsmasq-dns-5ccc8479f9-bfk54\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.373585 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.383731 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2xfrn"] Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.423256 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tnc5n"] Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.428762 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.430909 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tnc5n"] Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.547102 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tnc5n\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.547156 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-config\") pod \"dnsmasq-dns-57d769cc4f-tnc5n\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.547193 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8cxv\" (UniqueName: \"kubernetes.io/projected/30dd5ae5-2f8f-459e-9790-fc964f69e624-kube-api-access-t8cxv\") pod \"dnsmasq-dns-57d769cc4f-tnc5n\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.648251 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-config\") pod \"dnsmasq-dns-57d769cc4f-tnc5n\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.648307 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8cxv\" (UniqueName: \"kubernetes.io/projected/30dd5ae5-2f8f-459e-9790-fc964f69e624-kube-api-access-t8cxv\") pod \"dnsmasq-dns-57d769cc4f-tnc5n\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.648408 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tnc5n\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.649168 4767 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-config\") pod \"dnsmasq-dns-57d769cc4f-tnc5n\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.649189 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tnc5n\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.670143 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8cxv\" (UniqueName: \"kubernetes.io/projected/30dd5ae5-2f8f-459e-9790-fc964f69e624-kube-api-access-t8cxv\") pod \"dnsmasq-dns-57d769cc4f-tnc5n\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:53:48 crc kubenswrapper[4767]: I1124 21:53:48.748228 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.262173 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.268077 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.269084 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.270408 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.270613 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.270620 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-sr5cr" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.270665 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.270716 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.270938 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.271081 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.374555 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.374613 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
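
[annotation] Each replacement dnsmasq pod above walks the same volume sequence: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, for a "config" ConfigMap volume, a "dns-svc" ConfigMap volume, and a projected kube-api-access token (the token volume is injected automatically by the service-account admission plugin rather than declared in the spec). On the spec side that corresponds to volumes roughly like the sketch below, built with the Python client models; the ConfigMap names "dns" and "dns-svc" come from the reflector cache lines earlier, but the exact wiring and mount paths are assumptions for illustration:

```python
from kubernetes import client

# Sketch of the volume spec that would produce the mount sequence above.
volumes = [
    client.V1Volume(name="config",
                    config_map=client.V1ConfigMapVolumeSource(name="dns")),
    client.V1Volume(name="dns-svc",
                    config_map=client.V1ConfigMapVolumeSource(name="dns-svc")),
]
mounts = [
    client.V1VolumeMount(name="config", mount_path="/etc/dnsmasq.d",
                         read_only=True),          # mount path illustrative
    client.V1VolumeMount(name="dns-svc", mount_path="/etc/dnsmasq.d/svc",
                         read_only=True),          # mount path illustrative
]
```
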
\"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.374640 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn4x2\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-kube-api-access-dn4x2\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.374754 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.375108 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.375237 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.375631 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.375674 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.375716 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.375805 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.375841 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.477212 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.477304 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.477341 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.477366 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.477391 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.477421 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.477456 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.477483 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.477683 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.478300 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.477768 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.478399 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn4x2\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-kube-api-access-dn4x2\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.478474 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.478823 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.479050 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.479134 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.480850 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.485489 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc 
kubenswrapper[4767]: I1124 21:53:49.486970 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.488486 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.489425 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.498165 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.510590 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn4x2\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-kube-api-access-dn4x2\") pod \"rabbitmq-cell1-server-0\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.553106 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.554527 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.556650 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.557710 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.558092 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.558251 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.558940 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.559115 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vm78g" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.559418 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.566392 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.579874 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-config-data\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.579934 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30d319c1-5268-413c-a6db-9d376a2217c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.579983 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.580008 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.588593 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.588678 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h55b9\" (UniqueName: 
\"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-kube-api-access-h55b9\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.588725 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.588775 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.588903 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.589000 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30d319c1-5268-413c-a6db-9d376a2217c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.589040 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.594245 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.689811 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.689871 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h55b9\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-kube-api-access-h55b9\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.689900 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.689923 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.689965 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.689994 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30d319c1-5268-413c-a6db-9d376a2217c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.690033 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.690061 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-config-data\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.690077 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30d319c1-5268-413c-a6db-9d376a2217c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.690123 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.690141 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.690552 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.696294 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.696697 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.697238 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30d319c1-5268-413c-a6db-9d376a2217c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.700901 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-config-data\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.701252 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.708077 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.708145 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30d319c1-5268-413c-a6db-9d376a2217c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.709047 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.709661 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.712081 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h55b9\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-kube-api-access-h55b9\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.712853 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " pod="openstack/rabbitmq-server-0" Nov 24 21:53:49 crc kubenswrapper[4767]: I1124 21:53:49.877365 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.123004 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.140632 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.147874 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.148223 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.148680 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5vjwz" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.149803 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.153972 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.159347 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.315851 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2dc17c-c088-4182-8695-1c09ee22aa06-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.315922 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.315946 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blg4k\" (UniqueName: \"kubernetes.io/projected/3e2dc17c-c088-4182-8695-1c09ee22aa06-kube-api-access-blg4k\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.316006 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e2dc17c-c088-4182-8695-1c09ee22aa06-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.316028 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e2dc17c-c088-4182-8695-1c09ee22aa06-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.316046 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e2dc17c-c088-4182-8695-1c09ee22aa06-kolla-config\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.316112 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e2dc17c-c088-4182-8695-1c09ee22aa06-config-data-default\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.316128 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e2dc17c-c088-4182-8695-1c09ee22aa06-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.417351 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e2dc17c-c088-4182-8695-1c09ee22aa06-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.417412 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e2dc17c-c088-4182-8695-1c09ee22aa06-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.417443 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e2dc17c-c088-4182-8695-1c09ee22aa06-kolla-config\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.417491 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e2dc17c-c088-4182-8695-1c09ee22aa06-config-data-default\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.417511 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e2dc17c-c088-4182-8695-1c09ee22aa06-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.417567 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2dc17c-c088-4182-8695-1c09ee22aa06-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.417597 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.417622 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blg4k\" (UniqueName: \"kubernetes.io/projected/3e2dc17c-c088-4182-8695-1c09ee22aa06-kube-api-access-blg4k\") pod \"openstack-galera-0\" (UID: 
\"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.418462 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e2dc17c-c088-4182-8695-1c09ee22aa06-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.418620 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.419054 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e2dc17c-c088-4182-8695-1c09ee22aa06-kolla-config\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.419886 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e2dc17c-c088-4182-8695-1c09ee22aa06-config-data-default\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.420158 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e2dc17c-c088-4182-8695-1c09ee22aa06-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.424924 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2dc17c-c088-4182-8695-1c09ee22aa06-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.425548 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e2dc17c-c088-4182-8695-1c09ee22aa06-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.448174 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blg4k\" (UniqueName: \"kubernetes.io/projected/3e2dc17c-c088-4182-8695-1c09ee22aa06-kube-api-access-blg4k\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.458236 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"3e2dc17c-c088-4182-8695-1c09ee22aa06\") " pod="openstack/openstack-galera-0" Nov 24 21:53:51 crc kubenswrapper[4767]: I1124 21:53:51.474010 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.517454 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.519339 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.521972 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.522229 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.522247 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-bcl7m" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.522297 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.529925 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.633958 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b5a55be5-98af-48c4-800f-1595cb7e1959-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.634029 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a55be5-98af-48c4-800f-1595cb7e1959-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.634093 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a55be5-98af-48c4-800f-1595cb7e1959-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.634161 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5a55be5-98af-48c4-800f-1595cb7e1959-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.634188 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b5a55be5-98af-48c4-800f-1595cb7e1959-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.634239 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.634362 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkgsx\" (UniqueName: \"kubernetes.io/projected/b5a55be5-98af-48c4-800f-1595cb7e1959-kube-api-access-bkgsx\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.634426 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b5a55be5-98af-48c4-800f-1595cb7e1959-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.670550 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.671746 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.673739 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.675189 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hxv59" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.675794 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.679167 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.735692 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkgsx\" (UniqueName: \"kubernetes.io/projected/b5a55be5-98af-48c4-800f-1595cb7e1959-kube-api-access-bkgsx\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.735779 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b5a55be5-98af-48c4-800f-1595cb7e1959-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.735821 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b5a55be5-98af-48c4-800f-1595cb7e1959-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.735857 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a55be5-98af-48c4-800f-1595cb7e1959-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.735907 4767 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a55be5-98af-48c4-800f-1595cb7e1959-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.735946 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b5a55be5-98af-48c4-800f-1595cb7e1959-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.735975 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5a55be5-98af-48c4-800f-1595cb7e1959-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.736002 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.736219 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b5a55be5-98af-48c4-800f-1595cb7e1959-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.736241 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.736789 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b5a55be5-98af-48c4-800f-1595cb7e1959-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.736872 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b5a55be5-98af-48c4-800f-1595cb7e1959-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.737378 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5a55be5-98af-48c4-800f-1595cb7e1959-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.743074 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b5a55be5-98af-48c4-800f-1595cb7e1959-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.743615 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a55be5-98af-48c4-800f-1595cb7e1959-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.754351 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkgsx\" (UniqueName: \"kubernetes.io/projected/b5a55be5-98af-48c4-800f-1595cb7e1959-kube-api-access-bkgsx\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.764808 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b5a55be5-98af-48c4-800f-1595cb7e1959\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.838139 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c94f1692-e48b-43d8-9694-1d54ba3e8f41-config-data\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.838229 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c94f1692-e48b-43d8-9694-1d54ba3e8f41-kolla-config\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.838305 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v47b9\" (UniqueName: \"kubernetes.io/projected/c94f1692-e48b-43d8-9694-1d54ba3e8f41-kube-api-access-v47b9\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.838405 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94f1692-e48b-43d8-9694-1d54ba3e8f41-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.838432 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c94f1692-e48b-43d8-9694-1d54ba3e8f41-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.842496 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.940650 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c94f1692-e48b-43d8-9694-1d54ba3e8f41-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.940705 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c94f1692-e48b-43d8-9694-1d54ba3e8f41-config-data\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.940751 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c94f1692-e48b-43d8-9694-1d54ba3e8f41-kolla-config\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.940782 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v47b9\" (UniqueName: \"kubernetes.io/projected/c94f1692-e48b-43d8-9694-1d54ba3e8f41-kube-api-access-v47b9\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.940843 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94f1692-e48b-43d8-9694-1d54ba3e8f41-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.941563 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c94f1692-e48b-43d8-9694-1d54ba3e8f41-kolla-config\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.942049 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c94f1692-e48b-43d8-9694-1d54ba3e8f41-config-data\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.947690 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c94f1692-e48b-43d8-9694-1d54ba3e8f41-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.955240 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94f1692-e48b-43d8-9694-1d54ba3e8f41-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.956331 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v47b9\" (UniqueName: \"kubernetes.io/projected/c94f1692-e48b-43d8-9694-1d54ba3e8f41-kube-api-access-v47b9\") pod \"memcached-0\" (UID: 
\"c94f1692-e48b-43d8-9694-1d54ba3e8f41\") " pod="openstack/memcached-0" Nov 24 21:53:52 crc kubenswrapper[4767]: I1124 21:53:52.994887 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 21:53:54 crc kubenswrapper[4767]: I1124 21:53:54.754022 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:53:54 crc kubenswrapper[4767]: I1124 21:53:54.755174 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:53:54 crc kubenswrapper[4767]: I1124 21:53:54.757506 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zjh7p" Nov 24 21:53:54 crc kubenswrapper[4767]: I1124 21:53:54.770823 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:53:54 crc kubenswrapper[4767]: I1124 21:53:54.867530 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdk27\" (UniqueName: \"kubernetes.io/projected/dc8b7b67-1318-4978-880f-125741025c39-kube-api-access-gdk27\") pod \"kube-state-metrics-0\" (UID: \"dc8b7b67-1318-4978-880f-125741025c39\") " pod="openstack/kube-state-metrics-0" Nov 24 21:53:54 crc kubenswrapper[4767]: I1124 21:53:54.968578 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdk27\" (UniqueName: \"kubernetes.io/projected/dc8b7b67-1318-4978-880f-125741025c39-kube-api-access-gdk27\") pod \"kube-state-metrics-0\" (UID: \"dc8b7b67-1318-4978-880f-125741025c39\") " pod="openstack/kube-state-metrics-0" Nov 24 21:53:55 crc kubenswrapper[4767]: I1124 21:53:55.017612 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdk27\" (UniqueName: \"kubernetes.io/projected/dc8b7b67-1318-4978-880f-125741025c39-kube-api-access-gdk27\") pod \"kube-state-metrics-0\" (UID: \"dc8b7b67-1318-4978-880f-125741025c39\") " pod="openstack/kube-state-metrics-0" Nov 24 21:53:55 crc kubenswrapper[4767]: I1124 21:53:55.077849 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.112648 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.114629 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.118041 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.118134 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.118184 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.118352 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.118407 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-mmxtp" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.127305 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.128481 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.288831 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9fa46701-7516-4376-a72b-10c3eca271f8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.288878 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9fa46701-7516-4376-a72b-10c3eca271f8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.288914 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.288933 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.288955 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-config\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.288996 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.289013 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb99v\" (UniqueName: \"kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-kube-api-access-hb99v\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.289045 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.391536 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9fa46701-7516-4376-a72b-10c3eca271f8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.391588 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9fa46701-7516-4376-a72b-10c3eca271f8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.391621 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.391639 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.391658 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-config\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.391701 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " 
pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.391722 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb99v\" (UniqueName: \"kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-kube-api-access-hb99v\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.391754 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.392581 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9fa46701-7516-4376-a72b-10c3eca271f8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.395405 4767 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.395436 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5b4c963982fee8444440b339c0b04b674e3a0c1d34dde87d25887f0d341e5df1/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.396574 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9fa46701-7516-4376-a72b-10c3eca271f8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.396618 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-config\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.397702 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.398553 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:53:56 crc kubenswrapper[4767]: 
Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.416414 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb99v\" (UniqueName: \"kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-kube-api-access-hb99v\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0"
Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.433564 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " pod="openstack/prometheus-metric-storage-0"
Nov 24 21:53:56 crc kubenswrapper[4767]: I1124 21:53:56.454367 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.817920 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ngft4"]
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.819692 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ngft4"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.821829 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-68kdk"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.822019 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.826706 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.838387 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ngft4"]
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.885609 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6bq9m"]
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.887132 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6bq9m"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.900100 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6bq9m"]
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.917724 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e7e218a-3550-499e-8337-5940f98af41c-scripts\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.917770 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6e7e218a-3550-499e-8337-5940f98af41c-var-log-ovn\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.917790 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98mzl\" (UniqueName: \"kubernetes.io/projected/6e7e218a-3550-499e-8337-5940f98af41c-kube-api-access-98mzl\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.917820 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e7e218a-3550-499e-8337-5940f98af41c-var-run-ovn\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.917843 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6e7e218a-3550-499e-8337-5940f98af41c-var-run\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.917878 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e7e218a-3550-499e-8337-5940f98af41c-combined-ca-bundle\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4"
Nov 24 21:53:57 crc kubenswrapper[4767]: I1124 21:53:57.917894 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e7e218a-3550-499e-8337-5940f98af41c-ovn-controller-tls-certs\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4"
Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019014 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e7e218a-3550-499e-8337-5940f98af41c-scripts\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4"
Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019052 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6e7e218a-3550-499e-8337-5940f98af41c-var-log-ovn\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4"
\"kubernetes.io/host-path/6e7e218a-3550-499e-8337-5940f98af41c-var-log-ovn\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019099 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/336d57cd-046c-436a-a596-69890001522f-var-log\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019118 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e7e218a-3550-499e-8337-5940f98af41c-combined-ca-bundle\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019134 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e7e218a-3550-499e-8337-5940f98af41c-ovn-controller-tls-certs\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019156 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b6l9\" (UniqueName: \"kubernetes.io/projected/336d57cd-046c-436a-a596-69890001522f-kube-api-access-6b6l9\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019177 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/336d57cd-046c-436a-a596-69890001522f-var-run\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019221 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336d57cd-046c-436a-a596-69890001522f-scripts\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019237 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/336d57cd-046c-436a-a596-69890001522f-etc-ovs\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019259 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98mzl\" (UniqueName: \"kubernetes.io/projected/6e7e218a-3550-499e-8337-5940f98af41c-kube-api-access-98mzl\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019612 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/336d57cd-046c-436a-a596-69890001522f-var-lib\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019674 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e7e218a-3550-499e-8337-5940f98af41c-var-run-ovn\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019712 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6e7e218a-3550-499e-8337-5940f98af41c-var-run\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.019999 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6e7e218a-3550-499e-8337-5940f98af41c-var-log-ovn\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.020155 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e7e218a-3550-499e-8337-5940f98af41c-var-run-ovn\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.020194 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6e7e218a-3550-499e-8337-5940f98af41c-var-run\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.021776 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e7e218a-3550-499e-8337-5940f98af41c-scripts\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.023232 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e7e218a-3550-499e-8337-5940f98af41c-ovn-controller-tls-certs\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.023410 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e7e218a-3550-499e-8337-5940f98af41c-combined-ca-bundle\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.034298 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98mzl\" (UniqueName: \"kubernetes.io/projected/6e7e218a-3550-499e-8337-5940f98af41c-kube-api-access-98mzl\") pod \"ovn-controller-ngft4\" (UID: \"6e7e218a-3550-499e-8337-5940f98af41c\") " pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.121693 4767 
Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.121755 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b6l9\" (UniqueName: \"kubernetes.io/projected/336d57cd-046c-436a-a596-69890001522f-kube-api-access-6b6l9\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m"
Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.121785 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/336d57cd-046c-436a-a596-69890001522f-var-run\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m"
Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.121825 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336d57cd-046c-436a-a596-69890001522f-scripts\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m"
Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.121850 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/336d57cd-046c-436a-a596-69890001522f-etc-ovs\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m"
Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.121902 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/336d57cd-046c-436a-a596-69890001522f-var-lib\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m"
Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.122347 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/336d57cd-046c-436a-a596-69890001522f-var-log\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m"
Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.122400 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/336d57cd-046c-436a-a596-69890001522f-var-lib\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m"
Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.122777 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/336d57cd-046c-436a-a596-69890001522f-etc-ovs\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m"
Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.122845 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/336d57cd-046c-436a-a596-69890001522f-var-run\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m"
" pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.124880 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336d57cd-046c-436a-a596-69890001522f-scripts\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.139840 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ngft4" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.144152 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b6l9\" (UniqueName: \"kubernetes.io/projected/336d57cd-046c-436a-a596-69890001522f-kube-api-access-6b6l9\") pod \"ovn-controller-ovs-6bq9m\" (UID: \"336d57cd-046c-436a-a596-69890001522f\") " pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:53:58 crc kubenswrapper[4767]: I1124 21:53:58.217809 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.272610 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.274537 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.276309 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.280542 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.280545 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.280666 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wk4nb" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.280761 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.289585 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.451061 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4814045f-5f97-427e-a1bb-3aa438fc2e5d-config\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.451400 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814045f-5f97-427e-a1bb-3aa438fc2e5d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.451431 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wwwl\" (UniqueName: \"kubernetes.io/projected/4814045f-5f97-427e-a1bb-3aa438fc2e5d-kube-api-access-4wwwl\") pod 
\"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.451467 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.451488 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4814045f-5f97-427e-a1bb-3aa438fc2e5d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.451603 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4814045f-5f97-427e-a1bb-3aa438fc2e5d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.451639 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4814045f-5f97-427e-a1bb-3aa438fc2e5d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.451668 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814045f-5f97-427e-a1bb-3aa438fc2e5d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.553082 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4814045f-5f97-427e-a1bb-3aa438fc2e5d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.553159 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4814045f-5f97-427e-a1bb-3aa438fc2e5d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.553214 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814045f-5f97-427e-a1bb-3aa438fc2e5d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.553283 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4814045f-5f97-427e-a1bb-3aa438fc2e5d-config\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.553309 4767 
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.553334 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wwwl\" (UniqueName: \"kubernetes.io/projected/4814045f-5f97-427e-a1bb-3aa438fc2e5d-kube-api-access-4wwwl\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.553360 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.553384 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4814045f-5f97-427e-a1bb-3aa438fc2e5d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.553728 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4814045f-5f97-427e-a1bb-3aa438fc2e5d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.554145 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.554371 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4814045f-5f97-427e-a1bb-3aa438fc2e5d-config\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.554720 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4814045f-5f97-427e-a1bb-3aa438fc2e5d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.561038 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814045f-5f97-427e-a1bb-3aa438fc2e5d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.561382 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4814045f-5f97-427e-a1bb-3aa438fc2e5d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.573890 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814045f-5f97-427e-a1bb-3aa438fc2e5d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.577517 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wwwl\" (UniqueName: \"kubernetes.io/projected/4814045f-5f97-427e-a1bb-3aa438fc2e5d-kube-api-access-4wwwl\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.583733 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4814045f-5f97-427e-a1bb-3aa438fc2e5d\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.599368 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 24 21:53:59 crc kubenswrapper[4767]: I1124 21:53:59.777149 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 21:54:00 crc kubenswrapper[4767]: E1124 21:54:00.281712 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 24 21:54:00 crc kubenswrapper[4767]: E1124 21:54:00.283077 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xjc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-5dl6n_openstack(da499274-2ce5-4d67-b8a2-a85b93782ec0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:54:00 crc kubenswrapper[4767]: E1124 21:54:00.284630 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" podUID="da499274-2ce5-4d67-b8a2-a85b93782ec0"
Nov 24 21:54:00 crc kubenswrapper[4767]: E1124 21:54:00.297807 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 24 21:54:00 crc kubenswrapper[4767]: E1124 21:54:00.297979 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vd5hf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-2xfrn_openstack(8687aa6b-87dd-436e-ad82-c3ecfc0ff82c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 21:54:00 crc kubenswrapper[4767]: E1124 21:54:00.299284 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" podUID="8687aa6b-87dd-436e-ad82-c3ecfc0ff82c"
Nov 24 21:54:00 crc kubenswrapper[4767]: I1124 21:54:00.631608 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"30d319c1-5268-413c-a6db-9d376a2217c3","Type":"ContainerStarted","Data":"2c53137b58038ccef7db7ddc96408373ddde24b62180630098a4d43a1853501f"}
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.110517 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.118233 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.138372 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.140686 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn"
Nov 24 21:54:01 crc kubenswrapper[4767]: W1124 21:54:01.143390 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc94f1692_e48b_43d8_9694_1d54ba3e8f41.slice/crio-b09a3603b01d99a0f56647b13ebaf256e4fd1fc160d9cd4669abd05ee88b392a WatchSource:0}: Error finding container b09a3603b01d99a0f56647b13ebaf256e4fd1fc160d9cd4669abd05ee88b392a: Status 404 returned error can't find the container with id b09a3603b01d99a0f56647b13ebaf256e4fd1fc160d9cd4669abd05ee88b392a
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.153329 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-bfk54"]
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.164698 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tnc5n"]
Nov 24 21:54:01 crc kubenswrapper[4767]: W1124 21:54:01.169977 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7932e662_ab03_4bd6_b360_a21c21c93f1a.slice/crio-bc28ed509cd7a4d5c33743c04d960b33f60362f9b1e9b980bd82edfd6dc68051 WatchSource:0}: Error finding container bc28ed509cd7a4d5c33743c04d960b33f60362f9b1e9b980bd82edfd6dc68051: Status 404 returned error can't find the container with id bc28ed509cd7a4d5c33743c04d960b33f60362f9b1e9b980bd82edfd6dc68051
Nov 24 21:54:01 crc kubenswrapper[4767]: W1124 21:54:01.176297 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc8b7b67_1318_4978_880f_125741025c39.slice/crio-f8542530ff562b5bf38676154725639a47832fa1f5e859906f3a4883b3066895 WatchSource:0}: Error finding container f8542530ff562b5bf38676154725639a47832fa1f5e859906f3a4883b3066895: Status 404 returned error can't find the container with id f8542530ff562b5bf38676154725639a47832fa1f5e859906f3a4883b3066895
Nov 24 21:54:01 crc kubenswrapper[4767]: W1124 21:54:01.187896 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30dd5ae5_2f8f_459e_9790_fc964f69e624.slice/crio-c6fd7993198054ae61886c583d0dee2a12bdd0e57d6e210c325ebe9769e443d9 WatchSource:0}: Error finding container c6fd7993198054ae61886c583d0dee2a12bdd0e57d6e210c325ebe9769e443d9: Status 404 returned error can't find the container with id c6fd7993198054ae61886c583d0dee2a12bdd0e57d6e210c325ebe9769e443d9
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.188468 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n"
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.293801 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da499274-2ce5-4d67-b8a2-a85b93782ec0-config\") pod \"da499274-2ce5-4d67-b8a2-a85b93782ec0\" (UID: \"da499274-2ce5-4d67-b8a2-a85b93782ec0\") "
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.294028 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xjc6\" (UniqueName: \"kubernetes.io/projected/da499274-2ce5-4d67-b8a2-a85b93782ec0-kube-api-access-5xjc6\") pod \"da499274-2ce5-4d67-b8a2-a85b93782ec0\" (UID: \"da499274-2ce5-4d67-b8a2-a85b93782ec0\") "
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.294095 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd5hf\" (UniqueName: \"kubernetes.io/projected/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-kube-api-access-vd5hf\") pod \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") "
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.294190 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-dns-svc\") pod \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") "
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.294233 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-config\") pod \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\" (UID: \"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c\") "
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.295517 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-config" (OuterVolumeSpecName: "config") pod "8687aa6b-87dd-436e-ad82-c3ecfc0ff82c" (UID: "8687aa6b-87dd-436e-ad82-c3ecfc0ff82c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.295902 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da499274-2ce5-4d67-b8a2-a85b93782ec0-config" (OuterVolumeSpecName: "config") pod "da499274-2ce5-4d67-b8a2-a85b93782ec0" (UID: "da499274-2ce5-4d67-b8a2-a85b93782ec0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.296144 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8687aa6b-87dd-436e-ad82-c3ecfc0ff82c" (UID: "8687aa6b-87dd-436e-ad82-c3ecfc0ff82c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.302413 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da499274-2ce5-4d67-b8a2-a85b93782ec0-kube-api-access-5xjc6" (OuterVolumeSpecName: "kube-api-access-5xjc6") pod "da499274-2ce5-4d67-b8a2-a85b93782ec0" (UID: "da499274-2ce5-4d67-b8a2-a85b93782ec0"). InnerVolumeSpecName "kube-api-access-5xjc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.302493 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-kube-api-access-vd5hf" (OuterVolumeSpecName: "kube-api-access-vd5hf") pod "8687aa6b-87dd-436e-ad82-c3ecfc0ff82c" (UID: "8687aa6b-87dd-436e-ad82-c3ecfc0ff82c"). InnerVolumeSpecName "kube-api-access-vd5hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.396578 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd5hf\" (UniqueName: \"kubernetes.io/projected/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-kube-api-access-vd5hf\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.396602 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.396611 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.396621 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da499274-2ce5-4d67-b8a2-a85b93782ec0-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.396629 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xjc6\" (UniqueName: \"kubernetes.io/projected/da499274-2ce5-4d67-b8a2-a85b93782ec0-kube-api-access-5xjc6\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.405926 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.418487 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.445118 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ngft4"] Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.486098 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.574255 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6bq9m"] Nov 24 21:54:01 crc kubenswrapper[4767]: W1124 21:54:01.582985 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod336d57cd_046c_436a_a596_69890001522f.slice/crio-67ef7bc47417b8b63cca8387f1d4b3778c3865996ba49e7a812f9d6b65280572 WatchSource:0}: Error finding container 67ef7bc47417b8b63cca8387f1d4b3778c3865996ba49e7a812f9d6b65280572: Status 404 returned error can't find the container with id 67ef7bc47417b8b63cca8387f1d4b3778c3865996ba49e7a812f9d6b65280572 Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.641729 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" event={"ID":"da499274-2ce5-4d67-b8a2-a85b93782ec0","Type":"ContainerDied","Data":"de79b994aa3662aa7befde66a18ccbbf5ccd2f6765695e9105910a2101a48a4e"} Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.641777 4767 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5dl6n" Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.644341 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc8b7b67-1318-4978-880f-125741025c39","Type":"ContainerStarted","Data":"f8542530ff562b5bf38676154725639a47832fa1f5e859906f3a4883b3066895"} Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.648007 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" event={"ID":"30dd5ae5-2f8f-459e-9790-fc964f69e624","Type":"ContainerStarted","Data":"c6fd7993198054ae61886c583d0dee2a12bdd0e57d6e210c325ebe9769e443d9"} Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.651174 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" event={"ID":"7932e662-ab03-4bd6-b360-a21c21c93f1a","Type":"ContainerStarted","Data":"bc28ed509cd7a4d5c33743c04d960b33f60362f9b1e9b980bd82edfd6dc68051"} Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.652169 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5c433e97-140e-43fe-aa7b-1bd14d9e78b9","Type":"ContainerStarted","Data":"77b1ca3b5a49c3d4c8e416f578aeac14ca4839406d1ca3a3b811652114234ab3"} Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.654016 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ngft4" event={"ID":"6e7e218a-3550-499e-8337-5940f98af41c","Type":"ContainerStarted","Data":"06010ddbbe582e43bd18056ffed606f7460e5b6dedff48f5d0439a15bcee0f25"} Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.656305 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6bq9m" event={"ID":"336d57cd-046c-436a-a596-69890001522f","Type":"ContainerStarted","Data":"67ef7bc47417b8b63cca8387f1d4b3778c3865996ba49e7a812f9d6b65280572"} Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.658080 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b5a55be5-98af-48c4-800f-1595cb7e1959","Type":"ContainerStarted","Data":"2579f714a6ae4c5e1fe27db84b5e2d34739d8b1431314eb20aafb2acb41c9c85"} Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.659987 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e2dc17c-c088-4182-8695-1c09ee22aa06","Type":"ContainerStarted","Data":"65eeb69d6326e50cb7aa9068363bd3465e98bd70f402cef2e375cb001d16372c"} Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.661566 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9fa46701-7516-4376-a72b-10c3eca271f8","Type":"ContainerStarted","Data":"4c989c1fd50ece380f889af23b497792f31c5e8e5470776034acbf2c2bcb9e28"} Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.665656 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c94f1692-e48b-43d8-9694-1d54ba3e8f41","Type":"ContainerStarted","Data":"b09a3603b01d99a0f56647b13ebaf256e4fd1fc160d9cd4669abd05ee88b392a"} Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.667916 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2xfrn" event={"ID":"8687aa6b-87dd-436e-ad82-c3ecfc0ff82c","Type":"ContainerDied","Data":"648d940ab0b32877d68c808f0b982ae334d3a66da63a5b083ff4c3a50780190e"} Nov 24 21:54:01 crc 
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.759047 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5dl6n"]
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.783634 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5dl6n"]
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.837659 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2xfrn"]
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.844081 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2xfrn"]
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.867413 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ntjb8"]
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.871135 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.875678 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Nov 24 21:54:01 crc kubenswrapper[4767]: I1124 21:54:01.887354 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ntjb8"]
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.009462 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6w5d\" (UniqueName: \"kubernetes.io/projected/b359e7d5-b708-4bf2-9017-48099ff8e287-kube-api-access-n6w5d\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.009515 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b359e7d5-b708-4bf2-9017-48099ff8e287-combined-ca-bundle\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.009533 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b359e7d5-b708-4bf2-9017-48099ff8e287-ovs-rundir\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.009575 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b359e7d5-b708-4bf2-9017-48099ff8e287-ovn-rundir\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.009601 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b359e7d5-b708-4bf2-9017-48099ff8e287-config\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.009670 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b359e7d5-b708-4bf2-9017-48099ff8e287-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.110907 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6w5d\" (UniqueName: \"kubernetes.io/projected/b359e7d5-b708-4bf2-9017-48099ff8e287-kube-api-access-n6w5d\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.110973 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b359e7d5-b708-4bf2-9017-48099ff8e287-combined-ca-bundle\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.110996 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b359e7d5-b708-4bf2-9017-48099ff8e287-ovs-rundir\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.111047 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b359e7d5-b708-4bf2-9017-48099ff8e287-ovn-rundir\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.111077 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b359e7d5-b708-4bf2-9017-48099ff8e287-config\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.111156 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b359e7d5-b708-4bf2-9017-48099ff8e287-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.111422 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b359e7d5-b708-4bf2-9017-48099ff8e287-ovn-rundir\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.111425 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b359e7d5-b708-4bf2-9017-48099ff8e287-ovs-rundir\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.112397 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b359e7d5-b708-4bf2-9017-48099ff8e287-config\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b359e7d5-b708-4bf2-9017-48099ff8e287-config\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.116115 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b359e7d5-b708-4bf2-9017-48099ff8e287-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.116277 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b359e7d5-b708-4bf2-9017-48099ff8e287-combined-ca-bundle\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.129733 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6w5d\" (UniqueName: \"kubernetes.io/projected/b359e7d5-b708-4bf2-9017-48099ff8e287-kube-api-access-n6w5d\") pod \"ovn-controller-metrics-ntjb8\" (UID: \"b359e7d5-b708-4bf2-9017-48099ff8e287\") " pod="openstack/ovn-controller-metrics-ntjb8" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.202648 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.208157 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ntjb8" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.325567 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8687aa6b-87dd-436e-ad82-c3ecfc0ff82c" path="/var/lib/kubelet/pods/8687aa6b-87dd-436e-ad82-c3ecfc0ff82c/volumes" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.326202 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da499274-2ce5-4d67-b8a2-a85b93782ec0" path="/var/lib/kubelet/pods/da499274-2ce5-4d67-b8a2-a85b93782ec0/volumes" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.504371 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.505959 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.508585 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.508978 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-s76jm" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.509107 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.509371 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.517135 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.620491 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a77426c-8a5f-427c-accc-fa0de1270f9c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.620563 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.620596 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a77426c-8a5f-427c-accc-fa0de1270f9c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.621058 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a77426c-8a5f-427c-accc-fa0de1270f9c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.621156 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a77426c-8a5f-427c-accc-fa0de1270f9c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.621281 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a77426c-8a5f-427c-accc-fa0de1270f9c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.621411 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdvq6\" (UniqueName: \"kubernetes.io/projected/7a77426c-8a5f-427c-accc-fa0de1270f9c-kube-api-access-mdvq6\") pod \"ovsdbserver-sb-0\" (UID: 
\"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.621519 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a77426c-8a5f-427c-accc-fa0de1270f9c-config\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.723632 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdvq6\" (UniqueName: \"kubernetes.io/projected/7a77426c-8a5f-427c-accc-fa0de1270f9c-kube-api-access-mdvq6\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.723726 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a77426c-8a5f-427c-accc-fa0de1270f9c-config\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.723792 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a77426c-8a5f-427c-accc-fa0de1270f9c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.723813 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.723836 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a77426c-8a5f-427c-accc-fa0de1270f9c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.723853 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a77426c-8a5f-427c-accc-fa0de1270f9c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.723879 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a77426c-8a5f-427c-accc-fa0de1270f9c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.724621 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a77426c-8a5f-427c-accc-fa0de1270f9c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.724820 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/7a77426c-8a5f-427c-accc-fa0de1270f9c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.724918 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a77426c-8a5f-427c-accc-fa0de1270f9c-config\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.726203 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a77426c-8a5f-427c-accc-fa0de1270f9c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.726675 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.728810 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a77426c-8a5f-427c-accc-fa0de1270f9c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.729012 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a77426c-8a5f-427c-accc-fa0de1270f9c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.731194 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a77426c-8a5f-427c-accc-fa0de1270f9c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.743439 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdvq6\" (UniqueName: \"kubernetes.io/projected/7a77426c-8a5f-427c-accc-fa0de1270f9c-kube-api-access-mdvq6\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.770219 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7a77426c-8a5f-427c-accc-fa0de1270f9c\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:02 crc kubenswrapper[4767]: I1124 21:54:02.833065 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:03 crc kubenswrapper[4767]: W1124 21:54:03.465656 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4814045f_5f97_427e_a1bb_3aa438fc2e5d.slice/crio-c072580b98daf47348fd1c5ab19841b2fa69c038f13ee3cea547bd5abaad2b6c WatchSource:0}: Error finding container c072580b98daf47348fd1c5ab19841b2fa69c038f13ee3cea547bd5abaad2b6c: Status 404 returned error can't find the container with id c072580b98daf47348fd1c5ab19841b2fa69c038f13ee3cea547bd5abaad2b6c Nov 24 21:54:03 crc kubenswrapper[4767]: I1124 21:54:03.695848 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4814045f-5f97-427e-a1bb-3aa438fc2e5d","Type":"ContainerStarted","Data":"c072580b98daf47348fd1c5ab19841b2fa69c038f13ee3cea547bd5abaad2b6c"} Nov 24 21:54:04 crc kubenswrapper[4767]: I1124 21:54:04.083164 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ntjb8"] Nov 24 21:54:04 crc kubenswrapper[4767]: I1124 21:54:04.148722 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 21:54:04 crc kubenswrapper[4767]: I1124 21:54:04.703331 4767 generic.go:334] "Generic (PLEG): container finished" podID="30dd5ae5-2f8f-459e-9790-fc964f69e624" containerID="053980365c25a434532acfd46d1798fea8654350061b08bced5b22e6d88062cf" exitCode=0 Nov 24 21:54:04 crc kubenswrapper[4767]: I1124 21:54:04.703408 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" event={"ID":"30dd5ae5-2f8f-459e-9790-fc964f69e624","Type":"ContainerDied","Data":"053980365c25a434532acfd46d1798fea8654350061b08bced5b22e6d88062cf"} Nov 24 21:54:04 crc kubenswrapper[4767]: I1124 21:54:04.707704 4767 generic.go:334] "Generic (PLEG): container finished" podID="7932e662-ab03-4bd6-b360-a21c21c93f1a" containerID="397e58f003abf94924af029445f6deed4d3850c0384b79a63819a70e9973ce02" exitCode=0 Nov 24 21:54:04 crc kubenswrapper[4767]: I1124 21:54:04.707769 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" event={"ID":"7932e662-ab03-4bd6-b360-a21c21c93f1a","Type":"ContainerDied","Data":"397e58f003abf94924af029445f6deed4d3850c0384b79a63819a70e9973ce02"} Nov 24 21:54:05 crc kubenswrapper[4767]: W1124 21:54:05.275408 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb359e7d5_b708_4bf2_9017_48099ff8e287.slice/crio-3c37c599bf2cac413e7705a1ee0ecbb67736a243f32ea55fa4d021fe4cdc476f WatchSource:0}: Error finding container 3c37c599bf2cac413e7705a1ee0ecbb67736a243f32ea55fa4d021fe4cdc476f: Status 404 returned error can't find the container with id 3c37c599bf2cac413e7705a1ee0ecbb67736a243f32ea55fa4d021fe4cdc476f Nov 24 21:54:05 crc kubenswrapper[4767]: I1124 21:54:05.496126 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:54:05 crc kubenswrapper[4767]: I1124 21:54:05.496201 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:54:05 crc kubenswrapper[4767]: I1124 21:54:05.496256 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 21:54:05 crc kubenswrapper[4767]: I1124 21:54:05.496939 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e688e489a883e7391dd101f5a5646e7206f88c9971f33a2eee17c7b8ffed628d"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:54:05 crc kubenswrapper[4767]: I1124 21:54:05.497009 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://e688e489a883e7391dd101f5a5646e7206f88c9971f33a2eee17c7b8ffed628d" gracePeriod=600 Nov 24 21:54:05 crc kubenswrapper[4767]: I1124 21:54:05.727704 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ntjb8" event={"ID":"b359e7d5-b708-4bf2-9017-48099ff8e287","Type":"ContainerStarted","Data":"3c37c599bf2cac413e7705a1ee0ecbb67736a243f32ea55fa4d021fe4cdc476f"} Nov 24 21:54:05 crc kubenswrapper[4767]: I1124 21:54:05.731031 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="e688e489a883e7391dd101f5a5646e7206f88c9971f33a2eee17c7b8ffed628d" exitCode=0 Nov 24 21:54:05 crc kubenswrapper[4767]: I1124 21:54:05.731061 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"e688e489a883e7391dd101f5a5646e7206f88c9971f33a2eee17c7b8ffed628d"} Nov 24 21:54:05 crc kubenswrapper[4767]: I1124 21:54:05.731090 4767 scope.go:117] "RemoveContainer" containerID="5c376cc0e5d0460b519433b94fced4d0cba810050689003c18c581dd720c940d" Nov 24 21:54:06 crc kubenswrapper[4767]: I1124 21:54:06.741139 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7a77426c-8a5f-427c-accc-fa0de1270f9c","Type":"ContainerStarted","Data":"6b7439bdc2511a5e14ae49fc5d8612a8f0842fb242b45107ec48a5f96706920c"} Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.320502 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-bfk54"] Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.355760 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ncctf"] Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.357097 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.367113 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.379074 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ncctf"] Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.436023 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r9z7\" (UniqueName: \"kubernetes.io/projected/cde46a15-f2ca-40c6-acc9-963d57fac2cf-kube-api-access-2r9z7\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.436126 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.436162 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.436225 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-config\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.510884 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tnc5n"] Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.537562 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.537612 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.537665 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-config\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.537718 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r9z7\" (UniqueName: 
\"kubernetes.io/projected/cde46a15-f2ca-40c6-acc9-963d57fac2cf-kube-api-access-2r9z7\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.538558 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.538590 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.538777 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-config\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.550368 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v86z4"] Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.552372 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.555648 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.563311 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r9z7\" (UniqueName: \"kubernetes.io/projected/cde46a15-f2ca-40c6-acc9-963d57fac2cf-kube-api-access-2r9z7\") pod \"dnsmasq-dns-7f896c8c65-ncctf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.569087 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v86z4"] Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.639186 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.639260 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.639313 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pshmw\" (UniqueName: \"kubernetes.io/projected/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-kube-api-access-pshmw\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: 
\"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.639343 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.639384 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-config\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.692910 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.740927 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-config\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.741090 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.741158 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.741180 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pshmw\" (UniqueName: \"kubernetes.io/projected/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-kube-api-access-pshmw\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.741204 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.741735 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-config\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.743024 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.743401 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.744225 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.759156 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pshmw\" (UniqueName: \"kubernetes.io/projected/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-kube-api-access-pshmw\") pod \"dnsmasq-dns-86db49b7ff-v86z4\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:07 crc kubenswrapper[4767]: I1124 21:54:07.906819 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:12 crc kubenswrapper[4767]: I1124 21:54:12.533615 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v86z4"] Nov 24 21:54:12 crc kubenswrapper[4767]: W1124 21:54:12.839372 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf1ee997_d0ba_4242_9cf8_58e7ac123d86.slice/crio-5363a4ba9bd1bc79fd14d1346783e5b27cffe3173add6425655e5c9147812311 WatchSource:0}: Error finding container 5363a4ba9bd1bc79fd14d1346783e5b27cffe3173add6425655e5c9147812311: Status 404 returned error can't find the container with id 5363a4ba9bd1bc79fd14d1346783e5b27cffe3173add6425655e5c9147812311 Nov 24 21:54:12 crc kubenswrapper[4767]: I1124 21:54:12.882962 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ncctf"] Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.804589 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" event={"ID":"7932e662-ab03-4bd6-b360-a21c21c93f1a","Type":"ContainerStarted","Data":"fde87114f395280579cf187d2e81c346831f1f3ce71c476d4248b57b11eb84f8"} Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.805139 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.804734 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" podUID="7932e662-ab03-4bd6-b360-a21c21c93f1a" containerName="dnsmasq-dns" containerID="cri-o://fde87114f395280579cf187d2e81c346831f1f3ce71c476d4248b57b11eb84f8" gracePeriod=10 Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.809408 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"c94f1692-e48b-43d8-9694-1d54ba3e8f41","Type":"ContainerStarted","Data":"7b1690b630e6dc5e47cfb0c6d9814b75919169bb006fe632667bcbd330d0fb44"} Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.809481 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.810610 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" event={"ID":"cde46a15-f2ca-40c6-acc9-963d57fac2cf","Type":"ContainerStarted","Data":"cbf40e98d8d73c6638cc3cd36792165154b7253efa5aa2677997df65c2743577"} Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.812540 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6bq9m" event={"ID":"336d57cd-046c-436a-a596-69890001522f","Type":"ContainerStarted","Data":"63bae47050beecff9664e8bca7824a36b8e84f7ac40360f29d8e49c72be01548"} Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.813624 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" event={"ID":"cf1ee997-d0ba-4242-9cf8-58e7ac123d86","Type":"ContainerStarted","Data":"5363a4ba9bd1bc79fd14d1346783e5b27cffe3173add6425655e5c9147812311"} Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.815744 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"b2a57db0a7357f691890d9ae543dd8c8e63ac1b14aa419c6ceaa2fe9ae17ceb2"} Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.819651 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b5a55be5-98af-48c4-800f-1595cb7e1959","Type":"ContainerStarted","Data":"c59deb15e8ac6485719d0e694f30bd6464eff8a5275c71f96c0ce38b29417c31"} Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.823147 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" event={"ID":"30dd5ae5-2f8f-459e-9790-fc964f69e624","Type":"ContainerStarted","Data":"4650bc3ccc8631f39cd414acc8998cf770eee4c9266cf3dd10b95c194128635d"} Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.823418 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" podUID="30dd5ae5-2f8f-459e-9790-fc964f69e624" containerName="dnsmasq-dns" containerID="cri-o://4650bc3ccc8631f39cd414acc8998cf770eee4c9266cf3dd10b95c194128635d" gracePeriod=10 Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.823993 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.831739 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" podStartSLOduration=23.454750796 podStartE2EDuration="25.831716756s" podCreationTimestamp="2025-11-24 21:53:48 +0000 UTC" firstStartedPulling="2025-11-24 21:54:01.174281053 +0000 UTC m=+924.091264426" lastFinishedPulling="2025-11-24 21:54:03.551246974 +0000 UTC m=+926.468230386" observedRunningTime="2025-11-24 21:54:13.82937672 +0000 UTC m=+936.746360112" watchObservedRunningTime="2025-11-24 21:54:13.831716756 +0000 UTC m=+936.748700148" Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.870198 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" 
podStartSLOduration=12.985468077 podStartE2EDuration="21.870175498s" podCreationTimestamp="2025-11-24 21:53:52 +0000 UTC" firstStartedPulling="2025-11-24 21:54:01.148601345 +0000 UTC m=+924.065584717" lastFinishedPulling="2025-11-24 21:54:10.033308746 +0000 UTC m=+932.950292138" observedRunningTime="2025-11-24 21:54:13.864089525 +0000 UTC m=+936.781072917" watchObservedRunningTime="2025-11-24 21:54:13.870175498 +0000 UTC m=+936.787158880" Nov 24 21:54:13 crc kubenswrapper[4767]: I1124 21:54:13.934828 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" podStartSLOduration=23.55251174 podStartE2EDuration="25.934806101s" podCreationTimestamp="2025-11-24 21:53:48 +0000 UTC" firstStartedPulling="2025-11-24 21:54:01.199779967 +0000 UTC m=+924.116763339" lastFinishedPulling="2025-11-24 21:54:03.582074318 +0000 UTC m=+926.499057700" observedRunningTime="2025-11-24 21:54:13.932178367 +0000 UTC m=+936.849161809" watchObservedRunningTime="2025-11-24 21:54:13.934806101 +0000 UTC m=+936.851789473" Nov 24 21:54:14 crc kubenswrapper[4767]: I1124 21:54:14.836615 4767 generic.go:334] "Generic (PLEG): container finished" podID="30dd5ae5-2f8f-459e-9790-fc964f69e624" containerID="4650bc3ccc8631f39cd414acc8998cf770eee4c9266cf3dd10b95c194128635d" exitCode=0 Nov 24 21:54:14 crc kubenswrapper[4767]: I1124 21:54:14.836680 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" event={"ID":"30dd5ae5-2f8f-459e-9790-fc964f69e624","Type":"ContainerDied","Data":"4650bc3ccc8631f39cd414acc8998cf770eee4c9266cf3dd10b95c194128635d"} Nov 24 21:54:14 crc kubenswrapper[4767]: I1124 21:54:14.841031 4767 generic.go:334] "Generic (PLEG): container finished" podID="7932e662-ab03-4bd6-b360-a21c21c93f1a" containerID="fde87114f395280579cf187d2e81c346831f1f3ce71c476d4248b57b11eb84f8" exitCode=0 Nov 24 21:54:14 crc kubenswrapper[4767]: I1124 21:54:14.841082 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" event={"ID":"7932e662-ab03-4bd6-b360-a21c21c93f1a","Type":"ContainerDied","Data":"fde87114f395280579cf187d2e81c346831f1f3ce71c476d4248b57b11eb84f8"} Nov 24 21:54:14 crc kubenswrapper[4767]: I1124 21:54:14.843200 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5c433e97-140e-43fe-aa7b-1bd14d9e78b9","Type":"ContainerStarted","Data":"f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21"} Nov 24 21:54:14 crc kubenswrapper[4767]: I1124 21:54:14.845391 4767 generic.go:334] "Generic (PLEG): container finished" podID="336d57cd-046c-436a-a596-69890001522f" containerID="63bae47050beecff9664e8bca7824a36b8e84f7ac40360f29d8e49c72be01548" exitCode=0 Nov 24 21:54:14 crc kubenswrapper[4767]: I1124 21:54:14.845579 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6bq9m" event={"ID":"336d57cd-046c-436a-a596-69890001522f","Type":"ContainerDied","Data":"63bae47050beecff9664e8bca7824a36b8e84f7ac40360f29d8e49c72be01548"} Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.163242 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.289035 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8cxv\" (UniqueName: \"kubernetes.io/projected/30dd5ae5-2f8f-459e-9790-fc964f69e624-kube-api-access-t8cxv\") pod \"30dd5ae5-2f8f-459e-9790-fc964f69e624\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.289110 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-config\") pod \"30dd5ae5-2f8f-459e-9790-fc964f69e624\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.289182 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-dns-svc\") pod \"30dd5ae5-2f8f-459e-9790-fc964f69e624\" (UID: \"30dd5ae5-2f8f-459e-9790-fc964f69e624\") " Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.331663 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-config" (OuterVolumeSpecName: "config") pod "30dd5ae5-2f8f-459e-9790-fc964f69e624" (UID: "30dd5ae5-2f8f-459e-9790-fc964f69e624"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.334098 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.390947 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.398788 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30dd5ae5-2f8f-459e-9790-fc964f69e624-kube-api-access-t8cxv" (OuterVolumeSpecName: "kube-api-access-t8cxv") pod "30dd5ae5-2f8f-459e-9790-fc964f69e624" (UID: "30dd5ae5-2f8f-459e-9790-fc964f69e624"). InnerVolumeSpecName "kube-api-access-t8cxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.492334 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hhlw\" (UniqueName: \"kubernetes.io/projected/7932e662-ab03-4bd6-b360-a21c21c93f1a-kube-api-access-6hhlw\") pod \"7932e662-ab03-4bd6-b360-a21c21c93f1a\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.493119 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-config\") pod \"7932e662-ab03-4bd6-b360-a21c21c93f1a\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.493331 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-dns-svc\") pod \"7932e662-ab03-4bd6-b360-a21c21c93f1a\" (UID: \"7932e662-ab03-4bd6-b360-a21c21c93f1a\") " Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.493896 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8cxv\" (UniqueName: \"kubernetes.io/projected/30dd5ae5-2f8f-459e-9790-fc964f69e624-kube-api-access-t8cxv\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.539654 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7932e662-ab03-4bd6-b360-a21c21c93f1a-kube-api-access-6hhlw" (OuterVolumeSpecName: "kube-api-access-6hhlw") pod "7932e662-ab03-4bd6-b360-a21c21c93f1a" (UID: "7932e662-ab03-4bd6-b360-a21c21c93f1a"). InnerVolumeSpecName "kube-api-access-6hhlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.595488 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hhlw\" (UniqueName: \"kubernetes.io/projected/7932e662-ab03-4bd6-b360-a21c21c93f1a-kube-api-access-6hhlw\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.852873 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc8b7b67-1318-4978-880f-125741025c39","Type":"ContainerStarted","Data":"f3a923c7df30694cc9f1da10c16f928e6ac1a2314ee06df0d1c664cbfe67b2d9"} Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.852954 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.854978 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" event={"ID":"30dd5ae5-2f8f-459e-9790-fc964f69e624","Type":"ContainerDied","Data":"c6fd7993198054ae61886c583d0dee2a12bdd0e57d6e210c325ebe9769e443d9"} Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.855035 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tnc5n" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.855043 4767 scope.go:117] "RemoveContainer" containerID="4650bc3ccc8631f39cd414acc8998cf770eee4c9266cf3dd10b95c194128635d" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.856596 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" event={"ID":"7932e662-ab03-4bd6-b360-a21c21c93f1a","Type":"ContainerDied","Data":"bc28ed509cd7a4d5c33743c04d960b33f60362f9b1e9b980bd82edfd6dc68051"} Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.856623 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-bfk54" Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.858419 4767 generic.go:334] "Generic (PLEG): container finished" podID="cde46a15-f2ca-40c6-acc9-963d57fac2cf" containerID="760ce807e899554901a08a60a05ff8076155eb3fcfd3b77b0b77560678fba868" exitCode=0 Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.858467 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" event={"ID":"cde46a15-f2ca-40c6-acc9-963d57fac2cf","Type":"ContainerDied","Data":"760ce807e899554901a08a60a05ff8076155eb3fcfd3b77b0b77560678fba868"} Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.859583 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"30d319c1-5268-413c-a6db-9d376a2217c3","Type":"ContainerStarted","Data":"b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948"} Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.860899 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e2dc17c-c088-4182-8695-1c09ee22aa06","Type":"ContainerStarted","Data":"06f7d3c85cb93969d856adef8a79645185c658f4841cecd518ecdce18ac02f55"} Nov 24 21:54:15 crc kubenswrapper[4767]: I1124 21:54:15.874099 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.952068832 podStartE2EDuration="21.874078274s" podCreationTimestamp="2025-11-24 21:53:54 +0000 UTC" firstStartedPulling="2025-11-24 21:54:01.178901695 +0000 UTC m=+924.095885067" lastFinishedPulling="2025-11-24 21:54:15.100911147 +0000 UTC m=+938.017894509" observedRunningTime="2025-11-24 21:54:15.868750143 +0000 UTC m=+938.785733515" watchObservedRunningTime="2025-11-24 21:54:15.874078274 +0000 UTC m=+938.791061646" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.224648 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30dd5ae5-2f8f-459e-9790-fc964f69e624" (UID: "30dd5ae5-2f8f-459e-9790-fc964f69e624"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.241128 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-config" (OuterVolumeSpecName: "config") pod "7932e662-ab03-4bd6-b360-a21c21c93f1a" (UID: "7932e662-ab03-4bd6-b360-a21c21c93f1a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.252498 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7932e662-ab03-4bd6-b360-a21c21c93f1a" (UID: "7932e662-ab03-4bd6-b360-a21c21c93f1a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.307773 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.307805 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7932e662-ab03-4bd6-b360-a21c21c93f1a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.307816 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30dd5ae5-2f8f-459e-9790-fc964f69e624-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.481969 4767 scope.go:117] "RemoveContainer" containerID="053980365c25a434532acfd46d1798fea8654350061b08bced5b22e6d88062cf" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.540743 4767 scope.go:117] "RemoveContainer" containerID="fde87114f395280579cf187d2e81c346831f1f3ce71c476d4248b57b11eb84f8" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.700333 4767 scope.go:117] "RemoveContainer" containerID="397e58f003abf94924af029445f6deed4d3850c0384b79a63819a70e9973ce02" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.716814 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tnc5n"] Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.730893 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tnc5n"] Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.736587 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-bfk54"] Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.741172 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-bfk54"] Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.870551 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7a77426c-8a5f-427c-accc-fa0de1270f9c","Type":"ContainerStarted","Data":"a6d5bd10ec2d33a024c910ff6571975fa5ae427eb4e8981c76a7fb7e9aec0834"} Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.871086 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7a77426c-8a5f-427c-accc-fa0de1270f9c","Type":"ContainerStarted","Data":"eb05ea4eb7bf66ab497646f1ee70e03f0cd729559f2803172fe9ae422e40564a"} Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.876168 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" event={"ID":"cde46a15-f2ca-40c6-acc9-963d57fac2cf","Type":"ContainerStarted","Data":"5ec4dcf41a81927e2366c4a4d2a07046687090f74f550ca9c371016fe2b0189f"} Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.876389 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:16 crc 
kubenswrapper[4767]: I1124 21:54:16.878528 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6bq9m" event={"ID":"336d57cd-046c-436a-a596-69890001522f","Type":"ContainerStarted","Data":"3843cb71bc5733f905b0ed37ca0377a836032585d75611964b0779e26372df0d"} Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.880093 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ntjb8" event={"ID":"b359e7d5-b708-4bf2-9017-48099ff8e287","Type":"ContainerStarted","Data":"5d04849bd124b7606f6dcc5cf705c6a93476432b8d3506501c77e447d0caa841"} Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.881985 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ngft4" event={"ID":"6e7e218a-3550-499e-8337-5940f98af41c","Type":"ContainerStarted","Data":"aea891e60df0e62553b5eee7a196a99027cd3057d48bdef003de616c09769e0c"} Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.882488 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ngft4" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.883825 4767 generic.go:334] "Generic (PLEG): container finished" podID="cf1ee997-d0ba-4242-9cf8-58e7ac123d86" containerID="18dd82edff9178b27abf986c3c1aff946300d6195771300aea18e20459cbcbd7" exitCode=0 Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.883904 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" event={"ID":"cf1ee997-d0ba-4242-9cf8-58e7ac123d86","Type":"ContainerDied","Data":"18dd82edff9178b27abf986c3c1aff946300d6195771300aea18e20459cbcbd7"} Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.885633 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4814045f-5f97-427e-a1bb-3aa438fc2e5d","Type":"ContainerStarted","Data":"3c431e1677781190689da074ff9f853cf7becab34f6d4675aed36b3a8ab62c50"} Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.885675 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4814045f-5f97-427e-a1bb-3aa438fc2e5d","Type":"ContainerStarted","Data":"31f4cb90d409c97c002f8f950c859d1c8da2d979d9b81dca88653660836882d0"} Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.886962 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9fa46701-7516-4376-a72b-10c3eca271f8","Type":"ContainerStarted","Data":"14ef125f4c3d314c8a699b386e82bb5e988d1e9e0cdcdf681db1f9091fed3375"} Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.897154 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.871537785 podStartE2EDuration="15.89712869s" podCreationTimestamp="2025-11-24 21:54:01 +0000 UTC" firstStartedPulling="2025-11-24 21:54:05.970985257 +0000 UTC m=+928.887968629" lastFinishedPulling="2025-11-24 21:54:12.996576162 +0000 UTC m=+935.913559534" observedRunningTime="2025-11-24 21:54:16.89079453 +0000 UTC m=+939.807777892" watchObservedRunningTime="2025-11-24 21:54:16.89712869 +0000 UTC m=+939.814112082" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.922861 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" podStartSLOduration=9.922837269 podStartE2EDuration="9.922837269s" podCreationTimestamp="2025-11-24 21:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:54:16.914808312 +0000 UTC m=+939.831791694" watchObservedRunningTime="2025-11-24 21:54:16.922837269 +0000 UTC m=+939.839820641" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.942127 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.533365026 podStartE2EDuration="18.942097806s" podCreationTimestamp="2025-11-24 21:53:58 +0000 UTC" firstStartedPulling="2025-11-24 21:54:03.48626277 +0000 UTC m=+926.403246182" lastFinishedPulling="2025-11-24 21:54:12.89499559 +0000 UTC m=+935.811978962" observedRunningTime="2025-11-24 21:54:16.939447441 +0000 UTC m=+939.856430833" watchObservedRunningTime="2025-11-24 21:54:16.942097806 +0000 UTC m=+939.859081198" Nov 24 21:54:16 crc kubenswrapper[4767]: I1124 21:54:16.997778 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ngft4" podStartSLOduration=8.691084546 podStartE2EDuration="19.997754945s" podCreationTimestamp="2025-11-24 21:53:57 +0000 UTC" firstStartedPulling="2025-11-24 21:54:01.444758548 +0000 UTC m=+924.361741920" lastFinishedPulling="2025-11-24 21:54:12.751428947 +0000 UTC m=+935.668412319" observedRunningTime="2025-11-24 21:54:16.990747856 +0000 UTC m=+939.907731238" watchObservedRunningTime="2025-11-24 21:54:16.997754945 +0000 UTC m=+939.914738317" Nov 24 21:54:17 crc kubenswrapper[4767]: I1124 21:54:17.036115 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ntjb8" podStartSLOduration=8.025207224 podStartE2EDuration="16.036076802s" podCreationTimestamp="2025-11-24 21:54:01 +0000 UTC" firstStartedPulling="2025-11-24 21:54:05.278923923 +0000 UTC m=+928.195907295" lastFinishedPulling="2025-11-24 21:54:13.289793501 +0000 UTC m=+936.206776873" observedRunningTime="2025-11-24 21:54:17.03036947 +0000 UTC m=+939.947352852" watchObservedRunningTime="2025-11-24 21:54:17.036076802 +0000 UTC m=+939.953060174" Nov 24 21:54:17 crc kubenswrapper[4767]: I1124 21:54:17.599597 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 21:54:17 crc kubenswrapper[4767]: I1124 21:54:17.833217 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:17 crc kubenswrapper[4767]: I1124 21:54:17.833336 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:17 crc kubenswrapper[4767]: I1124 21:54:17.904317 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6bq9m" event={"ID":"336d57cd-046c-436a-a596-69890001522f","Type":"ContainerStarted","Data":"e52fe768c6e9eac5377407fe5cbbf66587e791a29a8003dd21019d0a14e2dee3"} Nov 24 21:54:17 crc kubenswrapper[4767]: I1124 21:54:17.904459 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:54:17 crc kubenswrapper[4767]: I1124 21:54:17.904700 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:54:17 crc kubenswrapper[4767]: I1124 21:54:17.909788 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" event={"ID":"cf1ee997-d0ba-4242-9cf8-58e7ac123d86","Type":"ContainerStarted","Data":"1dc344e19f176599d69fee1b162d726185ca4d78e1a2bb972165b8c2bde36c99"} Nov 24 
21:54:17 crc kubenswrapper[4767]: I1124 21:54:17.939689 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6bq9m" podStartSLOduration=10.71923478 podStartE2EDuration="20.939654839s" podCreationTimestamp="2025-11-24 21:53:57 +0000 UTC" firstStartedPulling="2025-11-24 21:54:01.588504886 +0000 UTC m=+924.505488258" lastFinishedPulling="2025-11-24 21:54:11.808924935 +0000 UTC m=+934.725908317" observedRunningTime="2025-11-24 21:54:17.928639587 +0000 UTC m=+940.845622979" watchObservedRunningTime="2025-11-24 21:54:17.939654839 +0000 UTC m=+940.856638241" Nov 24 21:54:17 crc kubenswrapper[4767]: I1124 21:54:17.964880 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" podStartSLOduration=10.964847173999999 podStartE2EDuration="10.964847174s" podCreationTimestamp="2025-11-24 21:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:54:17.959539233 +0000 UTC m=+940.876522645" watchObservedRunningTime="2025-11-24 21:54:17.964847174 +0000 UTC m=+940.881830586" Nov 24 21:54:18 crc kubenswrapper[4767]: I1124 21:54:18.323816 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30dd5ae5-2f8f-459e-9790-fc964f69e624" path="/var/lib/kubelet/pods/30dd5ae5-2f8f-459e-9790-fc964f69e624/volumes" Nov 24 21:54:18 crc kubenswrapper[4767]: I1124 21:54:18.324414 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7932e662-ab03-4bd6-b360-a21c21c93f1a" path="/var/lib/kubelet/pods/7932e662-ab03-4bd6-b360-a21c21c93f1a/volumes" Nov 24 21:54:18 crc kubenswrapper[4767]: I1124 21:54:18.921004 4767 generic.go:334] "Generic (PLEG): container finished" podID="b5a55be5-98af-48c4-800f-1595cb7e1959" containerID="c59deb15e8ac6485719d0e694f30bd6464eff8a5275c71f96c0ce38b29417c31" exitCode=0 Nov 24 21:54:18 crc kubenswrapper[4767]: I1124 21:54:18.922390 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b5a55be5-98af-48c4-800f-1595cb7e1959","Type":"ContainerDied","Data":"c59deb15e8ac6485719d0e694f30bd6464eff8a5275c71f96c0ce38b29417c31"} Nov 24 21:54:18 crc kubenswrapper[4767]: I1124 21:54:18.923541 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:19 crc kubenswrapper[4767]: I1124 21:54:19.600182 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 21:54:19 crc kubenswrapper[4767]: I1124 21:54:19.937062 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b5a55be5-98af-48c4-800f-1595cb7e1959","Type":"ContainerStarted","Data":"7f8815fb58a865fb37cdd7e054a9acc4c34f46837eb87d5bd3bd0b4044957383"} Nov 24 21:54:19 crc kubenswrapper[4767]: I1124 21:54:19.943542 4767 generic.go:334] "Generic (PLEG): container finished" podID="3e2dc17c-c088-4182-8695-1c09ee22aa06" containerID="06f7d3c85cb93969d856adef8a79645185c658f4841cecd518ecdce18ac02f55" exitCode=0 Nov 24 21:54:19 crc kubenswrapper[4767]: I1124 21:54:19.943609 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e2dc17c-c088-4182-8695-1c09ee22aa06","Type":"ContainerDied","Data":"06f7d3c85cb93969d856adef8a79645185c658f4841cecd518ecdce18ac02f55"} Nov 24 21:54:19 crc kubenswrapper[4767]: I1124 21:54:19.992033 4767 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=18.3348522 podStartE2EDuration="28.99200019s" podCreationTimestamp="2025-11-24 21:53:51 +0000 UTC" firstStartedPulling="2025-11-24 21:54:01.151631191 +0000 UTC m=+924.068614563" lastFinishedPulling="2025-11-24 21:54:11.808779151 +0000 UTC m=+934.725762553" observedRunningTime="2025-11-24 21:54:19.968626577 +0000 UTC m=+942.885610009" watchObservedRunningTime="2025-11-24 21:54:19.99200019 +0000 UTC m=+942.908983592" Nov 24 21:54:20 crc kubenswrapper[4767]: I1124 21:54:20.646232 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 21:54:20 crc kubenswrapper[4767]: I1124 21:54:20.879809 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:20 crc kubenswrapper[4767]: I1124 21:54:20.958845 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e2dc17c-c088-4182-8695-1c09ee22aa06","Type":"ContainerStarted","Data":"466bec3d452997da5d0431abd65fc8b3c3d210f457897186ad04c8ff93e84bf5"} Nov 24 21:54:20 crc kubenswrapper[4767]: I1124 21:54:20.996663 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.013770071 podStartE2EDuration="30.996629643s" podCreationTimestamp="2025-11-24 21:53:50 +0000 UTC" firstStartedPulling="2025-11-24 21:54:01.436879324 +0000 UTC m=+924.353862696" lastFinishedPulling="2025-11-24 21:54:12.419738886 +0000 UTC m=+935.336722268" observedRunningTime="2025-11-24 21:54:20.98983423 +0000 UTC m=+943.906817642" watchObservedRunningTime="2025-11-24 21:54:20.996629643 +0000 UTC m=+943.913613065" Nov 24 21:54:21 crc kubenswrapper[4767]: I1124 21:54:21.474891 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 24 21:54:21 crc kubenswrapper[4767]: I1124 21:54:21.475451 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 24 21:54:22 crc kubenswrapper[4767]: I1124 21:54:22.694595 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:22 crc kubenswrapper[4767]: I1124 21:54:22.845036 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 24 21:54:22 crc kubenswrapper[4767]: I1124 21:54:22.845382 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 24 21:54:22 crc kubenswrapper[4767]: I1124 21:54:22.881719 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 21:54:22 crc kubenswrapper[4767]: I1124 21:54:22.915459 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:22 crc kubenswrapper[4767]: I1124 21:54:22.972049 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ncctf"] Nov 24 21:54:22 crc kubenswrapper[4767]: I1124 21:54:22.974391 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" podUID="cde46a15-f2ca-40c6-acc9-963d57fac2cf" containerName="dnsmasq-dns" containerID="cri-o://5ec4dcf41a81927e2366c4a4d2a07046687090f74f550ca9c371016fe2b0189f" 
gracePeriod=10 Nov 24 21:54:22 crc kubenswrapper[4767]: I1124 21:54:22.996430 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 21:54:23 crc kubenswrapper[4767]: I1124 21:54:23.983374 4767 generic.go:334] "Generic (PLEG): container finished" podID="cde46a15-f2ca-40c6-acc9-963d57fac2cf" containerID="5ec4dcf41a81927e2366c4a4d2a07046687090f74f550ca9c371016fe2b0189f" exitCode=0 Nov 24 21:54:23 crc kubenswrapper[4767]: I1124 21:54:23.983425 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" event={"ID":"cde46a15-f2ca-40c6-acc9-963d57fac2cf","Type":"ContainerDied","Data":"5ec4dcf41a81927e2366c4a4d2a07046687090f74f550ca9c371016fe2b0189f"} Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.649602 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.892300 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 21:54:24 crc kubenswrapper[4767]: E1124 21:54:24.892667 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7932e662-ab03-4bd6-b360-a21c21c93f1a" containerName="init" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.892687 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="7932e662-ab03-4bd6-b360-a21c21c93f1a" containerName="init" Nov 24 21:54:24 crc kubenswrapper[4767]: E1124 21:54:24.892705 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30dd5ae5-2f8f-459e-9790-fc964f69e624" containerName="init" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.892713 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="30dd5ae5-2f8f-459e-9790-fc964f69e624" containerName="init" Nov 24 21:54:24 crc kubenswrapper[4767]: E1124 21:54:24.892723 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7932e662-ab03-4bd6-b360-a21c21c93f1a" containerName="dnsmasq-dns" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.892731 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="7932e662-ab03-4bd6-b360-a21c21c93f1a" containerName="dnsmasq-dns" Nov 24 21:54:24 crc kubenswrapper[4767]: E1124 21:54:24.892756 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30dd5ae5-2f8f-459e-9790-fc964f69e624" containerName="dnsmasq-dns" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.892763 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="30dd5ae5-2f8f-459e-9790-fc964f69e624" containerName="dnsmasq-dns" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.892939 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="30dd5ae5-2f8f-459e-9790-fc964f69e624" containerName="dnsmasq-dns" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.892960 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="7932e662-ab03-4bd6-b360-a21c21c93f1a" containerName="dnsmasq-dns" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.894056 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.898031 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.898379 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-b6dmz" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.898619 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.903561 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.907897 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.984214 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00633903-4662-43b6-a25f-0b18b9cdf455-scripts\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.984410 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00633903-4662-43b6-a25f-0b18b9cdf455-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.984636 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/00633903-4662-43b6-a25f-0b18b9cdf455-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.984702 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00633903-4662-43b6-a25f-0b18b9cdf455-config\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.984880 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5ctt\" (UniqueName: \"kubernetes.io/projected/00633903-4662-43b6-a25f-0b18b9cdf455-kube-api-access-l5ctt\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.984904 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00633903-4662-43b6-a25f-0b18b9cdf455-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:24 crc kubenswrapper[4767]: I1124 21:54:24.985033 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00633903-4662-43b6-a25f-0b18b9cdf455-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: 
I1124 21:54:25.082202 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.086175 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/00633903-4662-43b6-a25f-0b18b9cdf455-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.086225 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00633903-4662-43b6-a25f-0b18b9cdf455-config\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.086289 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5ctt\" (UniqueName: \"kubernetes.io/projected/00633903-4662-43b6-a25f-0b18b9cdf455-kube-api-access-l5ctt\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.086307 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00633903-4662-43b6-a25f-0b18b9cdf455-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.086344 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00633903-4662-43b6-a25f-0b18b9cdf455-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.086378 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00633903-4662-43b6-a25f-0b18b9cdf455-scripts\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.086405 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00633903-4662-43b6-a25f-0b18b9cdf455-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.086861 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00633903-4662-43b6-a25f-0b18b9cdf455-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.088110 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00633903-4662-43b6-a25f-0b18b9cdf455-config\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.091476 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00633903-4662-43b6-a25f-0b18b9cdf455-scripts\") pod 
\"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.094185 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/00633903-4662-43b6-a25f-0b18b9cdf455-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.098305 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00633903-4662-43b6-a25f-0b18b9cdf455-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.104495 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5ctt\" (UniqueName: \"kubernetes.io/projected/00633903-4662-43b6-a25f-0b18b9cdf455-kube-api-access-l5ctt\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.109969 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00633903-4662-43b6-a25f-0b18b9cdf455-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"00633903-4662-43b6-a25f-0b18b9cdf455\") " pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.167566 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-dcpx8"] Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.168953 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.188578 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-dns-svc\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.188615 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.188685 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-config\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.188703 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.188731 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w8b4\" (UniqueName: \"kubernetes.io/projected/9f577f96-f5cf-47b3-aa5c-179164418612-kube-api-access-5w8b4\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.193959 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-dcpx8"] Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.214901 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.295324 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-config\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.296095 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.296150 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w8b4\" (UniqueName: \"kubernetes.io/projected/9f577f96-f5cf-47b3-aa5c-179164418612-kube-api-access-5w8b4\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.296247 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-dns-svc\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.296276 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.296382 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-config\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.296899 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.297533 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.298666 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-dns-svc\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.327936 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w8b4\" (UniqueName: \"kubernetes.io/projected/9f577f96-f5cf-47b3-aa5c-179164418612-kube-api-access-5w8b4\") pod \"dnsmasq-dns-698758b865-dcpx8\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.488649 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.745920 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 21:54:25 crc kubenswrapper[4767]: W1124 21:54:25.750317 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00633903_4662_43b6_a25f_0b18b9cdf455.slice/crio-8b9f66cc5663d48fa61e38139c642267c09e7b3621a75970e17d3f524a66e8ff WatchSource:0}: Error finding container 8b9f66cc5663d48fa61e38139c642267c09e7b3621a75970e17d3f524a66e8ff: Status 404 returned error can't find the container with id 8b9f66cc5663d48fa61e38139c642267c09e7b3621a75970e17d3f524a66e8ff Nov 24 21:54:25 crc kubenswrapper[4767]: W1124 21:54:25.913067 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f577f96_f5cf_47b3_aa5c_179164418612.slice/crio-b200bf7a28bd48922767d649aa1cc3f9be8edfd3d554dda06b720ec961b9b7bc WatchSource:0}: Error finding container b200bf7a28bd48922767d649aa1cc3f9be8edfd3d554dda06b720ec961b9b7bc: Status 404 returned error can't find the container with id b200bf7a28bd48922767d649aa1cc3f9be8edfd3d554dda06b720ec961b9b7bc Nov 24 21:54:25 crc kubenswrapper[4767]: I1124 21:54:25.918604 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-dcpx8"] Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.008797 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-dcpx8" event={"ID":"9f577f96-f5cf-47b3-aa5c-179164418612","Type":"ContainerStarted","Data":"b200bf7a28bd48922767d649aa1cc3f9be8edfd3d554dda06b720ec961b9b7bc"} Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.013289 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"00633903-4662-43b6-a25f-0b18b9cdf455","Type":"ContainerStarted","Data":"8b9f66cc5663d48fa61e38139c642267c09e7b3621a75970e17d3f524a66e8ff"} Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.336633 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.345149 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.347861 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.348127 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.348240 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-szvh5" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.353379 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.359837 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.416058 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.416121 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/db319bac-943e-4baa-afb0-2089513c8935-lock\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.416212 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.416329 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/db319bac-943e-4baa-afb0-2089513c8935-cache\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.416738 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7fh6\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-kube-api-access-h7fh6\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.517819 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7fh6\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-kube-api-access-h7fh6\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.517877 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.517902 4767 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/db319bac-943e-4baa-afb0-2089513c8935-lock\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.517934 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.517974 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/db319bac-943e-4baa-afb0-2089513c8935-cache\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: E1124 21:54:26.518132 4767 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:54:26 crc kubenswrapper[4767]: E1124 21:54:26.518178 4767 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:54:26 crc kubenswrapper[4767]: E1124 21:54:26.518293 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift podName:db319bac-943e-4baa-afb0-2089513c8935 nodeName:}" failed. No retries permitted until 2025-11-24 21:54:27.018233986 +0000 UTC m=+949.935217578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift") pod "swift-storage-0" (UID: "db319bac-943e-4baa-afb0-2089513c8935") : configmap "swift-ring-files" not found Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.518576 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/db319bac-943e-4baa-afb0-2089513c8935-cache\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.518732 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/db319bac-943e-4baa-afb0-2089513c8935-lock\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.518858 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.537824 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7fh6\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-kube-api-access-h7fh6\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:26 crc kubenswrapper[4767]: I1124 21:54:26.542662 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.026705 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:27 crc kubenswrapper[4767]: E1124 21:54:27.027315 4767 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:54:27 crc kubenswrapper[4767]: E1124 21:54:27.027350 4767 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:54:27 crc kubenswrapper[4767]: E1124 21:54:27.032124 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift podName:db319bac-943e-4baa-afb0-2089513c8935 nodeName:}" failed. No retries permitted until 2025-11-24 21:54:28.032085465 +0000 UTC m=+950.949068877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift") pod "swift-storage-0" (UID: "db319bac-943e-4baa-afb0-2089513c8935") : configmap "swift-ring-files" not found Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.425784 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.536384 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r9z7\" (UniqueName: \"kubernetes.io/projected/cde46a15-f2ca-40c6-acc9-963d57fac2cf-kube-api-access-2r9z7\") pod \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.536472 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-config\") pod \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.536514 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-dns-svc\") pod \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.536552 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-ovsdbserver-sb\") pod \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\" (UID: \"cde46a15-f2ca-40c6-acc9-963d57fac2cf\") " Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.540918 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde46a15-f2ca-40c6-acc9-963d57fac2cf-kube-api-access-2r9z7" (OuterVolumeSpecName: "kube-api-access-2r9z7") pod "cde46a15-f2ca-40c6-acc9-963d57fac2cf" (UID: "cde46a15-f2ca-40c6-acc9-963d57fac2cf"). InnerVolumeSpecName "kube-api-access-2r9z7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.580802 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cde46a15-f2ca-40c6-acc9-963d57fac2cf" (UID: "cde46a15-f2ca-40c6-acc9-963d57fac2cf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.581158 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cde46a15-f2ca-40c6-acc9-963d57fac2cf" (UID: "cde46a15-f2ca-40c6-acc9-963d57fac2cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.583806 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-config" (OuterVolumeSpecName: "config") pod "cde46a15-f2ca-40c6-acc9-963d57fac2cf" (UID: "cde46a15-f2ca-40c6-acc9-963d57fac2cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.638842 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r9z7\" (UniqueName: \"kubernetes.io/projected/cde46a15-f2ca-40c6-acc9-963d57fac2cf-kube-api-access-2r9z7\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.638884 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.638892 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:27 crc kubenswrapper[4767]: I1124 21:54:27.638901 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cde46a15-f2ca-40c6-acc9-963d57fac2cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:28 crc kubenswrapper[4767]: I1124 21:54:28.044907 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:28 crc kubenswrapper[4767]: E1124 21:54:28.045157 4767 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:54:28 crc kubenswrapper[4767]: E1124 21:54:28.045180 4767 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:54:28 crc kubenswrapper[4767]: E1124 21:54:28.045228 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift podName:db319bac-943e-4baa-afb0-2089513c8935 nodeName:}" failed. No retries permitted until 2025-11-24 21:54:30.045210499 +0000 UTC m=+952.962193871 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift") pod "swift-storage-0" (UID: "db319bac-943e-4baa-afb0-2089513c8935") : configmap "swift-ring-files" not found Nov 24 21:54:28 crc kubenswrapper[4767]: I1124 21:54:28.049036 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-dcpx8" event={"ID":"9f577f96-f5cf-47b3-aa5c-179164418612","Type":"ContainerStarted","Data":"233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39"} Nov 24 21:54:28 crc kubenswrapper[4767]: I1124 21:54:28.052398 4767 generic.go:334] "Generic (PLEG): container finished" podID="9fa46701-7516-4376-a72b-10c3eca271f8" containerID="14ef125f4c3d314c8a699b386e82bb5e988d1e9e0cdcdf681db1f9091fed3375" exitCode=0 Nov 24 21:54:28 crc kubenswrapper[4767]: I1124 21:54:28.052445 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9fa46701-7516-4376-a72b-10c3eca271f8","Type":"ContainerDied","Data":"14ef125f4c3d314c8a699b386e82bb5e988d1e9e0cdcdf681db1f9091fed3375"} Nov 24 21:54:28 crc kubenswrapper[4767]: I1124 21:54:28.064685 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" event={"ID":"cde46a15-f2ca-40c6-acc9-963d57fac2cf","Type":"ContainerDied","Data":"cbf40e98d8d73c6638cc3cd36792165154b7253efa5aa2677997df65c2743577"} Nov 24 21:54:28 crc kubenswrapper[4767]: I1124 21:54:28.064756 4767 scope.go:117] "RemoveContainer" containerID="5ec4dcf41a81927e2366c4a4d2a07046687090f74f550ca9c371016fe2b0189f" Nov 24 21:54:28 crc kubenswrapper[4767]: I1124 21:54:28.064907 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-ncctf" Nov 24 21:54:28 crc kubenswrapper[4767]: I1124 21:54:28.142407 4767 scope.go:117] "RemoveContainer" containerID="760ce807e899554901a08a60a05ff8076155eb3fcfd3b77b0b77560678fba868" Nov 24 21:54:28 crc kubenswrapper[4767]: I1124 21:54:28.146056 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ncctf"] Nov 24 21:54:28 crc kubenswrapper[4767]: I1124 21:54:28.154657 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ncctf"] Nov 24 21:54:28 crc kubenswrapper[4767]: I1124 21:54:28.333157 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde46a15-f2ca-40c6-acc9-963d57fac2cf" path="/var/lib/kubelet/pods/cde46a15-f2ca-40c6-acc9-963d57fac2cf/volumes" Nov 24 21:54:29 crc kubenswrapper[4767]: I1124 21:54:29.078810 4767 generic.go:334] "Generic (PLEG): container finished" podID="9f577f96-f5cf-47b3-aa5c-179164418612" containerID="233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39" exitCode=0 Nov 24 21:54:29 crc kubenswrapper[4767]: I1124 21:54:29.078951 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-dcpx8" event={"ID":"9f577f96-f5cf-47b3-aa5c-179164418612","Type":"ContainerDied","Data":"233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39"} Nov 24 21:54:29 crc kubenswrapper[4767]: I1124 21:54:29.083229 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"00633903-4662-43b6-a25f-0b18b9cdf455","Type":"ContainerStarted","Data":"33b6dc30bb82693a9fde84600338575a90116b01e01cc541d6fceede2ed92b72"} Nov 24 21:54:29 crc kubenswrapper[4767]: I1124 21:54:29.083260 4767 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-northd-0" event={"ID":"00633903-4662-43b6-a25f-0b18b9cdf455","Type":"ContainerStarted","Data":"e5837dea69ef439a8e595d2f0928bfd444df78d15095d3a18b7263a9f53ef8b4"} Nov 24 21:54:29 crc kubenswrapper[4767]: I1124 21:54:29.083421 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 21:54:29 crc kubenswrapper[4767]: I1124 21:54:29.136307 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.209884177 podStartE2EDuration="5.136284136s" podCreationTimestamp="2025-11-24 21:54:24 +0000 UTC" firstStartedPulling="2025-11-24 21:54:25.752749867 +0000 UTC m=+948.669733239" lastFinishedPulling="2025-11-24 21:54:28.679149826 +0000 UTC m=+951.596133198" observedRunningTime="2025-11-24 21:54:29.12268334 +0000 UTC m=+952.039666722" watchObservedRunningTime="2025-11-24 21:54:29.136284136 +0000 UTC m=+952.053267508" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.085751 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:30 crc kubenswrapper[4767]: E1124 21:54:30.085963 4767 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:54:30 crc kubenswrapper[4767]: E1124 21:54:30.086300 4767 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:54:30 crc kubenswrapper[4767]: E1124 21:54:30.086384 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift podName:db319bac-943e-4baa-afb0-2089513c8935 nodeName:}" failed. No retries permitted until 2025-11-24 21:54:34.086360842 +0000 UTC m=+957.003344224 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift") pod "swift-storage-0" (UID: "db319bac-943e-4baa-afb0-2089513c8935") : configmap "swift-ring-files" not found Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.097151 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-dcpx8" event={"ID":"9f577f96-f5cf-47b3-aa5c-179164418612","Type":"ContainerStarted","Data":"33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28"} Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.097554 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.121128 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-dcpx8" podStartSLOduration=5.121104728 podStartE2EDuration="5.121104728s" podCreationTimestamp="2025-11-24 21:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:54:30.11763944 +0000 UTC m=+953.034622812" watchObservedRunningTime="2025-11-24 21:54:30.121104728 +0000 UTC m=+953.038088100" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.251136 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-pfdzc"] Nov 24 21:54:30 crc kubenswrapper[4767]: E1124 21:54:30.251607 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde46a15-f2ca-40c6-acc9-963d57fac2cf" containerName="init" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.251631 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde46a15-f2ca-40c6-acc9-963d57fac2cf" containerName="init" Nov 24 21:54:30 crc kubenswrapper[4767]: E1124 21:54:30.251652 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde46a15-f2ca-40c6-acc9-963d57fac2cf" containerName="dnsmasq-dns" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.251661 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde46a15-f2ca-40c6-acc9-963d57fac2cf" containerName="dnsmasq-dns" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.251973 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde46a15-f2ca-40c6-acc9-963d57fac2cf" containerName="dnsmasq-dns" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.253525 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.256072 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.256353 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.261615 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.261608 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pfdzc"] Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.391010 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-dispersionconf\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.391077 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/084fdc28-199d-44c7-93c8-67792c6f4829-etc-swift\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.391120 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-scripts\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.391155 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-combined-ca-bundle\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.391588 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-swiftconf\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.391743 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-ring-data-devices\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.391869 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7f9q\" (UniqueName: \"kubernetes.io/projected/084fdc28-199d-44c7-93c8-67792c6f4829-kube-api-access-v7f9q\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 
21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.493376 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-combined-ca-bundle\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.493524 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-swiftconf\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.493580 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-ring-data-devices\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.493628 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7f9q\" (UniqueName: \"kubernetes.io/projected/084fdc28-199d-44c7-93c8-67792c6f4829-kube-api-access-v7f9q\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.493661 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-dispersionconf\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.493713 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/084fdc28-199d-44c7-93c8-67792c6f4829-etc-swift\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.493784 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-scripts\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.494712 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-scripts\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.494728 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/084fdc28-199d-44c7-93c8-67792c6f4829-etc-swift\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.494765 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-ring-data-devices\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.500465 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-dispersionconf\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.501638 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-combined-ca-bundle\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.509413 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-swiftconf\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.515064 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7f9q\" (UniqueName: \"kubernetes.io/projected/084fdc28-199d-44c7-93c8-67792c6f4829-kube-api-access-v7f9q\") pod \"swift-ring-rebalance-pfdzc\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:30 crc kubenswrapper[4767]: I1124 21:54:30.586134 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:31 crc kubenswrapper[4767]: I1124 21:54:31.040645 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pfdzc"] Nov 24 21:54:31 crc kubenswrapper[4767]: W1124 21:54:31.062140 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod084fdc28_199d_44c7_93c8_67792c6f4829.slice/crio-1156e5d947212ed814e75295c69ae520bcd133ef7db0c8509ec00e3942532cca WatchSource:0}: Error finding container 1156e5d947212ed814e75295c69ae520bcd133ef7db0c8509ec00e3942532cca: Status 404 returned error can't find the container with id 1156e5d947212ed814e75295c69ae520bcd133ef7db0c8509ec00e3942532cca Nov 24 21:54:31 crc kubenswrapper[4767]: I1124 21:54:31.106057 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfdzc" event={"ID":"084fdc28-199d-44c7-93c8-67792c6f4829","Type":"ContainerStarted","Data":"1156e5d947212ed814e75295c69ae520bcd133ef7db0c8509ec00e3942532cca"} Nov 24 21:54:31 crc kubenswrapper[4767]: I1124 21:54:31.181662 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 24 21:54:31 crc kubenswrapper[4767]: I1124 21:54:31.262240 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 24 21:54:31 crc kubenswrapper[4767]: I1124 21:54:31.628079 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 24 21:54:31 crc kubenswrapper[4767]: I1124 21:54:31.728610 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.646315 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-358f-account-create-4kwkf"] Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.648168 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-358f-account-create-4kwkf" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.650542 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.657000 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-358f-account-create-4kwkf"] Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.711132 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-2grc5"] Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.712340 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2grc5" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.721116 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2grc5"] Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.750766 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cq4v\" (UniqueName: \"kubernetes.io/projected/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-kube-api-access-8cq4v\") pod \"keystone-358f-account-create-4kwkf\" (UID: \"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910\") " pod="openstack/keystone-358f-account-create-4kwkf" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.750864 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-operator-scripts\") pod \"keystone-358f-account-create-4kwkf\" (UID: \"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910\") " pod="openstack/keystone-358f-account-create-4kwkf" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.852583 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cq4v\" (UniqueName: \"kubernetes.io/projected/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-kube-api-access-8cq4v\") pod \"keystone-358f-account-create-4kwkf\" (UID: \"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910\") " pod="openstack/keystone-358f-account-create-4kwkf" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.852639 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-operator-scripts\") pod \"keystone-358f-account-create-4kwkf\" (UID: \"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910\") " pod="openstack/keystone-358f-account-create-4kwkf" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.852670 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0863e69a-b331-4647-a79c-d0a2e182f14d-operator-scripts\") pod \"keystone-db-create-2grc5\" (UID: \"0863e69a-b331-4647-a79c-d0a2e182f14d\") " pod="openstack/keystone-db-create-2grc5" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.852697 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks755\" (UniqueName: \"kubernetes.io/projected/0863e69a-b331-4647-a79c-d0a2e182f14d-kube-api-access-ks755\") pod \"keystone-db-create-2grc5\" (UID: \"0863e69a-b331-4647-a79c-d0a2e182f14d\") " pod="openstack/keystone-db-create-2grc5" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.853619 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-operator-scripts\") pod \"keystone-358f-account-create-4kwkf\" (UID: \"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910\") " pod="openstack/keystone-358f-account-create-4kwkf" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.877398 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cq4v\" (UniqueName: \"kubernetes.io/projected/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-kube-api-access-8cq4v\") pod \"keystone-358f-account-create-4kwkf\" (UID: \"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910\") " pod="openstack/keystone-358f-account-create-4kwkf" Nov 24 21:54:32 crc 
kubenswrapper[4767]: I1124 21:54:32.947899 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9sh7l"] Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.949325 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9sh7l" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.953762 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0863e69a-b331-4647-a79c-d0a2e182f14d-operator-scripts\") pod \"keystone-db-create-2grc5\" (UID: \"0863e69a-b331-4647-a79c-d0a2e182f14d\") " pod="openstack/keystone-db-create-2grc5" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.953812 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks755\" (UniqueName: \"kubernetes.io/projected/0863e69a-b331-4647-a79c-d0a2e182f14d-kube-api-access-ks755\") pod \"keystone-db-create-2grc5\" (UID: \"0863e69a-b331-4647-a79c-d0a2e182f14d\") " pod="openstack/keystone-db-create-2grc5" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.954665 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0863e69a-b331-4647-a79c-d0a2e182f14d-operator-scripts\") pod \"keystone-db-create-2grc5\" (UID: \"0863e69a-b331-4647-a79c-d0a2e182f14d\") " pod="openstack/keystone-db-create-2grc5" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.956587 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8d7e-account-create-vv26x"] Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.958006 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8d7e-account-create-vv26x" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.959627 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.962310 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9sh7l"] Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.971045 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks755\" (UniqueName: \"kubernetes.io/projected/0863e69a-b331-4647-a79c-d0a2e182f14d-kube-api-access-ks755\") pod \"keystone-db-create-2grc5\" (UID: \"0863e69a-b331-4647-a79c-d0a2e182f14d\") " pod="openstack/keystone-db-create-2grc5" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.975621 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-358f-account-create-4kwkf" Nov 24 21:54:32 crc kubenswrapper[4767]: I1124 21:54:32.988987 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8d7e-account-create-vv26x"] Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.032722 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2grc5" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.054852 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wsh6\" (UniqueName: \"kubernetes.io/projected/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-kube-api-access-9wsh6\") pod \"placement-db-create-9sh7l\" (UID: \"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec\") " pod="openstack/placement-db-create-9sh7l" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.054957 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-operator-scripts\") pod \"placement-db-create-9sh7l\" (UID: \"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec\") " pod="openstack/placement-db-create-9sh7l" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.054988 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zwqc\" (UniqueName: \"kubernetes.io/projected/2a407b22-b744-42f8-9746-30f7b21c8e2b-kube-api-access-2zwqc\") pod \"placement-8d7e-account-create-vv26x\" (UID: \"2a407b22-b744-42f8-9746-30f7b21c8e2b\") " pod="openstack/placement-8d7e-account-create-vv26x" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.055011 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a407b22-b744-42f8-9746-30f7b21c8e2b-operator-scripts\") pod \"placement-8d7e-account-create-vv26x\" (UID: \"2a407b22-b744-42f8-9746-30f7b21c8e2b\") " pod="openstack/placement-8d7e-account-create-vv26x" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.156837 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wsh6\" (UniqueName: \"kubernetes.io/projected/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-kube-api-access-9wsh6\") pod \"placement-db-create-9sh7l\" (UID: \"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec\") " pod="openstack/placement-db-create-9sh7l" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.157022 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-operator-scripts\") pod \"placement-db-create-9sh7l\" (UID: \"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec\") " pod="openstack/placement-db-create-9sh7l" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.157065 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zwqc\" (UniqueName: \"kubernetes.io/projected/2a407b22-b744-42f8-9746-30f7b21c8e2b-kube-api-access-2zwqc\") pod \"placement-8d7e-account-create-vv26x\" (UID: \"2a407b22-b744-42f8-9746-30f7b21c8e2b\") " pod="openstack/placement-8d7e-account-create-vv26x" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.157116 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a407b22-b744-42f8-9746-30f7b21c8e2b-operator-scripts\") pod \"placement-8d7e-account-create-vv26x\" (UID: \"2a407b22-b744-42f8-9746-30f7b21c8e2b\") " pod="openstack/placement-8d7e-account-create-vv26x" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.157817 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-operator-scripts\") pod \"placement-db-create-9sh7l\" (UID: \"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec\") " pod="openstack/placement-db-create-9sh7l" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.159385 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a407b22-b744-42f8-9746-30f7b21c8e2b-operator-scripts\") pod \"placement-8d7e-account-create-vv26x\" (UID: \"2a407b22-b744-42f8-9746-30f7b21c8e2b\") " pod="openstack/placement-8d7e-account-create-vv26x" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.172868 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rcbjg"] Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.174354 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rcbjg" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.177868 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wsh6\" (UniqueName: \"kubernetes.io/projected/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-kube-api-access-9wsh6\") pod \"placement-db-create-9sh7l\" (UID: \"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec\") " pod="openstack/placement-db-create-9sh7l" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.197940 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rcbjg"] Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.214838 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zwqc\" (UniqueName: \"kubernetes.io/projected/2a407b22-b744-42f8-9746-30f7b21c8e2b-kube-api-access-2zwqc\") pod \"placement-8d7e-account-create-vv26x\" (UID: \"2a407b22-b744-42f8-9746-30f7b21c8e2b\") " pod="openstack/placement-8d7e-account-create-vv26x" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.261438 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef6da9a-e416-4a02-8507-1a4caabc88c6-operator-scripts\") pod \"glance-db-create-rcbjg\" (UID: \"4ef6da9a-e416-4a02-8507-1a4caabc88c6\") " pod="openstack/glance-db-create-rcbjg" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.261591 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jfn\" (UniqueName: \"kubernetes.io/projected/4ef6da9a-e416-4a02-8507-1a4caabc88c6-kube-api-access-w8jfn\") pod \"glance-db-create-rcbjg\" (UID: \"4ef6da9a-e416-4a02-8507-1a4caabc88c6\") " pod="openstack/glance-db-create-rcbjg" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.304004 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c148-account-create-dm25n"] Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.308909 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c148-account-create-dm25n" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.315462 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8d7e-account-create-vv26x" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.327964 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.332002 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9sh7l" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.357730 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c148-account-create-dm25n"] Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.364175 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef6da9a-e416-4a02-8507-1a4caabc88c6-operator-scripts\") pod \"glance-db-create-rcbjg\" (UID: \"4ef6da9a-e416-4a02-8507-1a4caabc88c6\") " pod="openstack/glance-db-create-rcbjg" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.364526 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8jfn\" (UniqueName: \"kubernetes.io/projected/4ef6da9a-e416-4a02-8507-1a4caabc88c6-kube-api-access-w8jfn\") pod \"glance-db-create-rcbjg\" (UID: \"4ef6da9a-e416-4a02-8507-1a4caabc88c6\") " pod="openstack/glance-db-create-rcbjg" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.366292 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef6da9a-e416-4a02-8507-1a4caabc88c6-operator-scripts\") pod \"glance-db-create-rcbjg\" (UID: \"4ef6da9a-e416-4a02-8507-1a4caabc88c6\") " pod="openstack/glance-db-create-rcbjg" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.409351 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8jfn\" (UniqueName: \"kubernetes.io/projected/4ef6da9a-e416-4a02-8507-1a4caabc88c6-kube-api-access-w8jfn\") pod \"glance-db-create-rcbjg\" (UID: \"4ef6da9a-e416-4a02-8507-1a4caabc88c6\") " pod="openstack/glance-db-create-rcbjg" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.466673 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt2xz\" (UniqueName: \"kubernetes.io/projected/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-kube-api-access-jt2xz\") pod \"glance-c148-account-create-dm25n\" (UID: \"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda\") " pod="openstack/glance-c148-account-create-dm25n" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.466788 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-operator-scripts\") pod \"glance-c148-account-create-dm25n\" (UID: \"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda\") " pod="openstack/glance-c148-account-create-dm25n" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.568980 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt2xz\" (UniqueName: \"kubernetes.io/projected/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-kube-api-access-jt2xz\") pod \"glance-c148-account-create-dm25n\" (UID: \"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda\") " pod="openstack/glance-c148-account-create-dm25n" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.569110 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-operator-scripts\") pod \"glance-c148-account-create-dm25n\" (UID: \"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda\") " pod="openstack/glance-c148-account-create-dm25n" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.570109 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-operator-scripts\") pod \"glance-c148-account-create-dm25n\" (UID: \"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda\") " pod="openstack/glance-c148-account-create-dm25n" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.588888 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt2xz\" (UniqueName: \"kubernetes.io/projected/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-kube-api-access-jt2xz\") pod \"glance-c148-account-create-dm25n\" (UID: \"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda\") " pod="openstack/glance-c148-account-create-dm25n" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.592202 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rcbjg" Nov 24 21:54:33 crc kubenswrapper[4767]: I1124 21:54:33.652883 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c148-account-create-dm25n" Nov 24 21:54:34 crc kubenswrapper[4767]: I1124 21:54:34.180384 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:34 crc kubenswrapper[4767]: E1124 21:54:34.180588 4767 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:54:34 crc kubenswrapper[4767]: E1124 21:54:34.180851 4767 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:54:34 crc kubenswrapper[4767]: E1124 21:54:34.180931 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift podName:db319bac-943e-4baa-afb0-2089513c8935 nodeName:}" failed. No retries permitted until 2025-11-24 21:54:42.180905485 +0000 UTC m=+965.097888867 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift") pod "swift-storage-0" (UID: "db319bac-943e-4baa-afb0-2089513c8935") : configmap "swift-ring-files" not found Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.176532 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-48vjg"] Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.179578 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-48vjg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.193096 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-48vjg"] Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.274390 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-134c-account-create-bkvsg"] Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.276356 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-134c-account-create-bkvsg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.278661 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.285673 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-134c-account-create-bkvsg"] Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.300327 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd9e387f-20cc-4618-915a-bf9a33b40ddd-operator-scripts\") pod \"watcher-db-create-48vjg\" (UID: \"cd9e387f-20cc-4618-915a-bf9a33b40ddd\") " pod="openstack/watcher-db-create-48vjg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.300505 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2gz7\" (UniqueName: \"kubernetes.io/projected/cd9e387f-20cc-4618-915a-bf9a33b40ddd-kube-api-access-k2gz7\") pod \"watcher-db-create-48vjg\" (UID: \"cd9e387f-20cc-4618-915a-bf9a33b40ddd\") " pod="openstack/watcher-db-create-48vjg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.401670 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ce5857-f490-47b9-b07d-ecf4d1aa2045-operator-scripts\") pod \"watcher-134c-account-create-bkvsg\" (UID: \"88ce5857-f490-47b9-b07d-ecf4d1aa2045\") " pod="openstack/watcher-134c-account-create-bkvsg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.401813 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2gz7\" (UniqueName: \"kubernetes.io/projected/cd9e387f-20cc-4618-915a-bf9a33b40ddd-kube-api-access-k2gz7\") pod \"watcher-db-create-48vjg\" (UID: \"cd9e387f-20cc-4618-915a-bf9a33b40ddd\") " pod="openstack/watcher-db-create-48vjg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.401851 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn4q8\" (UniqueName: \"kubernetes.io/projected/88ce5857-f490-47b9-b07d-ecf4d1aa2045-kube-api-access-vn4q8\") pod \"watcher-134c-account-create-bkvsg\" (UID: \"88ce5857-f490-47b9-b07d-ecf4d1aa2045\") " pod="openstack/watcher-134c-account-create-bkvsg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.401931 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd9e387f-20cc-4618-915a-bf9a33b40ddd-operator-scripts\") pod \"watcher-db-create-48vjg\" (UID: \"cd9e387f-20cc-4618-915a-bf9a33b40ddd\") " pod="openstack/watcher-db-create-48vjg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.402912 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd9e387f-20cc-4618-915a-bf9a33b40ddd-operator-scripts\") pod \"watcher-db-create-48vjg\" (UID: \"cd9e387f-20cc-4618-915a-bf9a33b40ddd\") " pod="openstack/watcher-db-create-48vjg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.421796 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2gz7\" (UniqueName: \"kubernetes.io/projected/cd9e387f-20cc-4618-915a-bf9a33b40ddd-kube-api-access-k2gz7\") pod \"watcher-db-create-48vjg\" (UID: 
\"cd9e387f-20cc-4618-915a-bf9a33b40ddd\") " pod="openstack/watcher-db-create-48vjg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.491055 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.503492 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ce5857-f490-47b9-b07d-ecf4d1aa2045-operator-scripts\") pod \"watcher-134c-account-create-bkvsg\" (UID: \"88ce5857-f490-47b9-b07d-ecf4d1aa2045\") " pod="openstack/watcher-134c-account-create-bkvsg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.503628 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn4q8\" (UniqueName: \"kubernetes.io/projected/88ce5857-f490-47b9-b07d-ecf4d1aa2045-kube-api-access-vn4q8\") pod \"watcher-134c-account-create-bkvsg\" (UID: \"88ce5857-f490-47b9-b07d-ecf4d1aa2045\") " pod="openstack/watcher-134c-account-create-bkvsg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.503969 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-48vjg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.507528 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ce5857-f490-47b9-b07d-ecf4d1aa2045-operator-scripts\") pod \"watcher-134c-account-create-bkvsg\" (UID: \"88ce5857-f490-47b9-b07d-ecf4d1aa2045\") " pod="openstack/watcher-134c-account-create-bkvsg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.526631 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn4q8\" (UniqueName: \"kubernetes.io/projected/88ce5857-f490-47b9-b07d-ecf4d1aa2045-kube-api-access-vn4q8\") pod \"watcher-134c-account-create-bkvsg\" (UID: \"88ce5857-f490-47b9-b07d-ecf4d1aa2045\") " pod="openstack/watcher-134c-account-create-bkvsg" Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.554555 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v86z4"] Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.555049 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" podUID="cf1ee997-d0ba-4242-9cf8-58e7ac123d86" containerName="dnsmasq-dns" containerID="cri-o://1dc344e19f176599d69fee1b162d726185ca4d78e1a2bb972165b8c2bde36c99" gracePeriod=10 Nov 24 21:54:35 crc kubenswrapper[4767]: I1124 21:54:35.597045 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-134c-account-create-bkvsg" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.165438 4767 generic.go:334] "Generic (PLEG): container finished" podID="cf1ee997-d0ba-4242-9cf8-58e7ac123d86" containerID="1dc344e19f176599d69fee1b162d726185ca4d78e1a2bb972165b8c2bde36c99" exitCode=0 Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.165494 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" event={"ID":"cf1ee997-d0ba-4242-9cf8-58e7ac123d86","Type":"ContainerDied","Data":"1dc344e19f176599d69fee1b162d726185ca4d78e1a2bb972165b8c2bde36c99"} Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.563949 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.631939 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pshmw\" (UniqueName: \"kubernetes.io/projected/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-kube-api-access-pshmw\") pod \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.632199 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-sb\") pod \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.632244 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-config\") pod \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.632360 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-dns-svc\") pod \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.632409 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-nb\") pod \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\" (UID: \"cf1ee997-d0ba-4242-9cf8-58e7ac123d86\") " Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.646403 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-kube-api-access-pshmw" (OuterVolumeSpecName: "kube-api-access-pshmw") pod "cf1ee997-d0ba-4242-9cf8-58e7ac123d86" (UID: "cf1ee997-d0ba-4242-9cf8-58e7ac123d86"). InnerVolumeSpecName "kube-api-access-pshmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.735129 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pshmw\" (UniqueName: \"kubernetes.io/projected/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-kube-api-access-pshmw\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.786046 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf1ee997-d0ba-4242-9cf8-58e7ac123d86" (UID: "cf1ee997-d0ba-4242-9cf8-58e7ac123d86"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.785745 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf1ee997-d0ba-4242-9cf8-58e7ac123d86" (UID: "cf1ee997-d0ba-4242-9cf8-58e7ac123d86"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.792480 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf1ee997-d0ba-4242-9cf8-58e7ac123d86" (UID: "cf1ee997-d0ba-4242-9cf8-58e7ac123d86"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.795048 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-config" (OuterVolumeSpecName: "config") pod "cf1ee997-d0ba-4242-9cf8-58e7ac123d86" (UID: "cf1ee997-d0ba-4242-9cf8-58e7ac123d86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.842233 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.842284 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.842298 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.842308 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf1ee997-d0ba-4242-9cf8-58e7ac123d86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.921984 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-48vjg"] Nov 24 21:54:36 crc kubenswrapper[4767]: W1124 21:54:36.929717 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd9e387f_20cc_4618_915a_bf9a33b40ddd.slice/crio-67a036bccb618ec0cb0b23a9315b5d452d67db54506c5ba2d77eab6c9c1b66b4 WatchSource:0}: Error finding container 67a036bccb618ec0cb0b23a9315b5d452d67db54506c5ba2d77eab6c9c1b66b4: Status 404 returned error can't find the container with id 67a036bccb618ec0cb0b23a9315b5d452d67db54506c5ba2d77eab6c9c1b66b4 Nov 24 21:54:36 crc kubenswrapper[4767]: W1124 21:54:36.936599 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a407b22_b744_42f8_9746_30f7b21c8e2b.slice/crio-af0f9f5b42bfae4b91e13bf8a711449263378d38b82f3228687ea0b8cb56ca4c WatchSource:0}: Error finding container af0f9f5b42bfae4b91e13bf8a711449263378d38b82f3228687ea0b8cb56ca4c: Status 404 returned error can't find the container with id af0f9f5b42bfae4b91e13bf8a711449263378d38b82f3228687ea0b8cb56ca4c Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.942373 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8d7e-account-create-vv26x"] Nov 24 21:54:36 crc kubenswrapper[4767]: I1124 21:54:36.950965 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9sh7l"] Nov 24 21:54:36 crc kubenswrapper[4767]: 
I1124 21:54:36.960338 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rcbjg"] Nov 24 21:54:37 crc kubenswrapper[4767]: I1124 21:54:37.110372 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c148-account-create-dm25n"] Nov 24 21:54:37 crc kubenswrapper[4767]: I1124 21:54:37.120385 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2grc5"] Nov 24 21:54:37 crc kubenswrapper[4767]: I1124 21:54:37.128403 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-134c-account-create-bkvsg"] Nov 24 21:54:37 crc kubenswrapper[4767]: I1124 21:54:37.137294 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-358f-account-create-4kwkf"] Nov 24 21:54:37 crc kubenswrapper[4767]: I1124 21:54:37.176123 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" event={"ID":"cf1ee997-d0ba-4242-9cf8-58e7ac123d86","Type":"ContainerDied","Data":"5363a4ba9bd1bc79fd14d1346783e5b27cffe3173add6425655e5c9147812311"} Nov 24 21:54:37 crc kubenswrapper[4767]: I1124 21:54:37.176203 4767 scope.go:117] "RemoveContainer" containerID="1dc344e19f176599d69fee1b162d726185ca4d78e1a2bb972165b8c2bde36c99" Nov 24 21:54:37 crc kubenswrapper[4767]: I1124 21:54:37.176609 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v86z4" Nov 24 21:54:37 crc kubenswrapper[4767]: I1124 21:54:37.177648 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-48vjg" event={"ID":"cd9e387f-20cc-4618-915a-bf9a33b40ddd","Type":"ContainerStarted","Data":"67a036bccb618ec0cb0b23a9315b5d452d67db54506c5ba2d77eab6c9c1b66b4"} Nov 24 21:54:37 crc kubenswrapper[4767]: I1124 21:54:37.179259 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d7e-account-create-vv26x" event={"ID":"2a407b22-b744-42f8-9746-30f7b21c8e2b","Type":"ContainerStarted","Data":"af0f9f5b42bfae4b91e13bf8a711449263378d38b82f3228687ea0b8cb56ca4c"} Nov 24 21:54:37 crc kubenswrapper[4767]: I1124 21:54:37.212037 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v86z4"] Nov 24 21:54:37 crc kubenswrapper[4767]: I1124 21:54:37.220009 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v86z4"] Nov 24 21:54:39 crc kubenswrapper[4767]: W1124 21:54:37.699958 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ef6da9a_e416_4a02_8507_1a4caabc88c6.slice/crio-fe079b39362fa60d3be56b6dd5e684d608c03b1c3d0802d671ba049379454523 WatchSource:0}: Error finding container fe079b39362fa60d3be56b6dd5e684d608c03b1c3d0802d671ba049379454523: Status 404 returned error can't find the container with id fe079b39362fa60d3be56b6dd5e684d608c03b1c3d0802d671ba049379454523 Nov 24 21:54:39 crc kubenswrapper[4767]: W1124 21:54:37.708593 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92bd7ac9_4d3e_4e41_8cc6_03fd71a99bda.slice/crio-7726e2d3c62968ac3ed3b027a546b2b25995dd49a494b8416d2c5f725dc26c45 WatchSource:0}: Error finding container 7726e2d3c62968ac3ed3b027a546b2b25995dd49a494b8416d2c5f725dc26c45: Status 404 returned error can't find the container with id 7726e2d3c62968ac3ed3b027a546b2b25995dd49a494b8416d2c5f725dc26c45 Nov 24 21:54:39 crc kubenswrapper[4767]: W1124 
21:54:37.726447 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6045c07d_e6f9_4bd9_9a6e_e60f4b7b5910.slice/crio-979260ac3ba0afe83901dd75958d83ff129cbb1ed4de204b96662665e443a0d1 WatchSource:0}: Error finding container 979260ac3ba0afe83901dd75958d83ff129cbb1ed4de204b96662665e443a0d1: Status 404 returned error can't find the container with id 979260ac3ba0afe83901dd75958d83ff129cbb1ed4de204b96662665e443a0d1 Nov 24 21:54:39 crc kubenswrapper[4767]: I1124 21:54:37.749195 4767 scope.go:117] "RemoveContainer" containerID="18dd82edff9178b27abf986c3c1aff946300d6195771300aea18e20459cbcbd7" Nov 24 21:54:39 crc kubenswrapper[4767]: I1124 21:54:38.190661 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9sh7l" event={"ID":"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec","Type":"ContainerStarted","Data":"28f24d94d6438d3d6d28331531b7753d34c5acf152e1dacbdc7723b85e6b2ed6"} Nov 24 21:54:39 crc kubenswrapper[4767]: I1124 21:54:38.192092 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rcbjg" event={"ID":"4ef6da9a-e416-4a02-8507-1a4caabc88c6","Type":"ContainerStarted","Data":"fe079b39362fa60d3be56b6dd5e684d608c03b1c3d0802d671ba049379454523"} Nov 24 21:54:39 crc kubenswrapper[4767]: I1124 21:54:38.193544 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2grc5" event={"ID":"0863e69a-b331-4647-a79c-d0a2e182f14d","Type":"ContainerStarted","Data":"a727d481cc436bcb5035eda95fd2ae90bac7b59e33c0505d147883dbdd71636d"} Nov 24 21:54:39 crc kubenswrapper[4767]: I1124 21:54:38.195220 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c148-account-create-dm25n" event={"ID":"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda","Type":"ContainerStarted","Data":"7726e2d3c62968ac3ed3b027a546b2b25995dd49a494b8416d2c5f725dc26c45"} Nov 24 21:54:39 crc kubenswrapper[4767]: I1124 21:54:38.197507 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-134c-account-create-bkvsg" event={"ID":"88ce5857-f490-47b9-b07d-ecf4d1aa2045","Type":"ContainerStarted","Data":"1f6528d1e48f694f9b572bee508ca1566e66c9019d083c7333a24e14f429eeae"} Nov 24 21:54:39 crc kubenswrapper[4767]: I1124 21:54:38.198773 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-358f-account-create-4kwkf" event={"ID":"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910","Type":"ContainerStarted","Data":"979260ac3ba0afe83901dd75958d83ff129cbb1ed4de204b96662665e443a0d1"} Nov 24 21:54:39 crc kubenswrapper[4767]: I1124 21:54:38.344329 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf1ee997-d0ba-4242-9cf8-58e7ac123d86" path="/var/lib/kubelet/pods/cf1ee997-d0ba-4242-9cf8-58e7ac123d86/volumes" Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.221955 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c148-account-create-dm25n" event={"ID":"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda","Type":"ContainerStarted","Data":"42e8d69a9b7ae679681c36ea4306c85a14aa8a30600f4e43ce0863d683fb17f2"} Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.226106 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-358f-account-create-4kwkf" event={"ID":"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910","Type":"ContainerStarted","Data":"616dca5308755e8442e9b46ff10ad31fc5d023330a20b0b7c2234de4dfd44409"} Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.229877 4767 generic.go:334] "Generic 
(PLEG): container finished" podID="ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec" containerID="0fdb3d324be3afeb8f665f4f6af799fd5b2e02d9080fe4f849eaea25ec631cfd" exitCode=0 Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.230020 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9sh7l" event={"ID":"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec","Type":"ContainerDied","Data":"0fdb3d324be3afeb8f665f4f6af799fd5b2e02d9080fe4f849eaea25ec631cfd"} Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.234699 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9fa46701-7516-4376-a72b-10c3eca271f8","Type":"ContainerStarted","Data":"941bf4b90f241ba71fc8a7839ff3dfbd09862e995b99fa8ddd7b52c1bf32771b"} Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.242544 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfdzc" event={"ID":"084fdc28-199d-44c7-93c8-67792c6f4829","Type":"ContainerStarted","Data":"9a2852d147c26cff0f2ea6dad65d677e38146207ea8fcb744eec52baa03cfb38"} Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.244481 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-c148-account-create-dm25n" podStartSLOduration=7.244452663 podStartE2EDuration="7.244452663s" podCreationTimestamp="2025-11-24 21:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:54:40.239590275 +0000 UTC m=+963.156573647" watchObservedRunningTime="2025-11-24 21:54:40.244452663 +0000 UTC m=+963.161436055" Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.252778 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rcbjg" event={"ID":"4ef6da9a-e416-4a02-8507-1a4caabc88c6","Type":"ContainerStarted","Data":"b833bf0b41cb962f95dee8a4b67a1b3dfd1aecdcc88114f7b2f2b08bfa908533"} Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.265250 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-48vjg" event={"ID":"cd9e387f-20cc-4618-915a-bf9a33b40ddd","Type":"ContainerStarted","Data":"19796438dd1357f69a0b3d3d0895eec9b7adfcadbd8a8ad951fae7b36d6f06b0"} Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.267057 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2grc5" event={"ID":"0863e69a-b331-4647-a79c-d0a2e182f14d","Type":"ContainerStarted","Data":"2ec56342422b53837226f85e0e0d7e21d21742ef716f68bff45b4b2314bca895"} Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.267258 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-358f-account-create-4kwkf" podStartSLOduration=8.267231339 podStartE2EDuration="8.267231339s" podCreationTimestamp="2025-11-24 21:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:54:40.259341896 +0000 UTC m=+963.176325278" watchObservedRunningTime="2025-11-24 21:54:40.267231339 +0000 UTC m=+963.184214731" Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.271050 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-134c-account-create-bkvsg" event={"ID":"88ce5857-f490-47b9-b07d-ecf4d1aa2045","Type":"ContainerStarted","Data":"527b49a76fd817305e9d545e14ee5cd6a34b82a4678630c60cb764c88d049326"} Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 
21:54:40.275016 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d7e-account-create-vv26x" event={"ID":"2a407b22-b744-42f8-9746-30f7b21c8e2b","Type":"ContainerStarted","Data":"1090b95d987f7a1ed0cf64ecf9ab93d603b564a39d35ea1fe16054568cbbd445"} Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.281624 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.292620 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-rcbjg" podStartSLOduration=7.2925967289999996 podStartE2EDuration="7.292596729s" podCreationTimestamp="2025-11-24 21:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:54:40.287969018 +0000 UTC m=+963.204952390" watchObservedRunningTime="2025-11-24 21:54:40.292596729 +0000 UTC m=+963.209580101" Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.308323 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-pfdzc" podStartSLOduration=5.105691743 podStartE2EDuration="10.308302375s" podCreationTimestamp="2025-11-24 21:54:30 +0000 UTC" firstStartedPulling="2025-11-24 21:54:31.066173321 +0000 UTC m=+953.983156693" lastFinishedPulling="2025-11-24 21:54:36.268783953 +0000 UTC m=+959.185767325" observedRunningTime="2025-11-24 21:54:40.304722063 +0000 UTC m=+963.221705435" watchObservedRunningTime="2025-11-24 21:54:40.308302375 +0000 UTC m=+963.225285747" Nov 24 21:54:40 crc kubenswrapper[4767]: I1124 21:54:40.364599 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-134c-account-create-bkvsg" podStartSLOduration=5.364574801 podStartE2EDuration="5.364574801s" podCreationTimestamp="2025-11-24 21:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:54:40.361205976 +0000 UTC m=+963.278189378" watchObservedRunningTime="2025-11-24 21:54:40.364574801 +0000 UTC m=+963.281558173" Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.287139 4767 generic.go:334] "Generic (PLEG): container finished" podID="0863e69a-b331-4647-a79c-d0a2e182f14d" containerID="2ec56342422b53837226f85e0e0d7e21d21742ef716f68bff45b4b2314bca895" exitCode=0 Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.287323 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2grc5" event={"ID":"0863e69a-b331-4647-a79c-d0a2e182f14d","Type":"ContainerDied","Data":"2ec56342422b53837226f85e0e0d7e21d21742ef716f68bff45b4b2314bca895"} Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.289764 4767 generic.go:334] "Generic (PLEG): container finished" podID="2a407b22-b744-42f8-9746-30f7b21c8e2b" containerID="1090b95d987f7a1ed0cf64ecf9ab93d603b564a39d35ea1fe16054568cbbd445" exitCode=0 Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.289913 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d7e-account-create-vv26x" event={"ID":"2a407b22-b744-42f8-9746-30f7b21c8e2b","Type":"ContainerDied","Data":"1090b95d987f7a1ed0cf64ecf9ab93d603b564a39d35ea1fe16054568cbbd445"} Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.292401 4767 generic.go:334] "Generic (PLEG): container finished" podID="92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda" 
containerID="42e8d69a9b7ae679681c36ea4306c85a14aa8a30600f4e43ce0863d683fb17f2" exitCode=0 Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.292504 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c148-account-create-dm25n" event={"ID":"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda","Type":"ContainerDied","Data":"42e8d69a9b7ae679681c36ea4306c85a14aa8a30600f4e43ce0863d683fb17f2"} Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.294539 4767 generic.go:334] "Generic (PLEG): container finished" podID="88ce5857-f490-47b9-b07d-ecf4d1aa2045" containerID="527b49a76fd817305e9d545e14ee5cd6a34b82a4678630c60cb764c88d049326" exitCode=0 Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.294578 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-134c-account-create-bkvsg" event={"ID":"88ce5857-f490-47b9-b07d-ecf4d1aa2045","Type":"ContainerDied","Data":"527b49a76fd817305e9d545e14ee5cd6a34b82a4678630c60cb764c88d049326"} Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.298889 4767 generic.go:334] "Generic (PLEG): container finished" podID="6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910" containerID="616dca5308755e8442e9b46ff10ad31fc5d023330a20b0b7c2234de4dfd44409" exitCode=0 Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.298989 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-358f-account-create-4kwkf" event={"ID":"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910","Type":"ContainerDied","Data":"616dca5308755e8442e9b46ff10ad31fc5d023330a20b0b7c2234de4dfd44409"} Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.301958 4767 generic.go:334] "Generic (PLEG): container finished" podID="4ef6da9a-e416-4a02-8507-1a4caabc88c6" containerID="b833bf0b41cb962f95dee8a4b67a1b3dfd1aecdcc88114f7b2f2b08bfa908533" exitCode=0 Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.302037 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rcbjg" event={"ID":"4ef6da9a-e416-4a02-8507-1a4caabc88c6","Type":"ContainerDied","Data":"b833bf0b41cb962f95dee8a4b67a1b3dfd1aecdcc88114f7b2f2b08bfa908533"} Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.311400 4767 generic.go:334] "Generic (PLEG): container finished" podID="cd9e387f-20cc-4618-915a-bf9a33b40ddd" containerID="19796438dd1357f69a0b3d3d0895eec9b7adfcadbd8a8ad951fae7b36d6f06b0" exitCode=0 Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.312676 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-48vjg" event={"ID":"cd9e387f-20cc-4618-915a-bf9a33b40ddd","Type":"ContainerDied","Data":"19796438dd1357f69a0b3d3d0895eec9b7adfcadbd8a8ad951fae7b36d6f06b0"} Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.618351 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2grc5" Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.762540 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks755\" (UniqueName: \"kubernetes.io/projected/0863e69a-b331-4647-a79c-d0a2e182f14d-kube-api-access-ks755\") pod \"0863e69a-b331-4647-a79c-d0a2e182f14d\" (UID: \"0863e69a-b331-4647-a79c-d0a2e182f14d\") " Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.762991 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0863e69a-b331-4647-a79c-d0a2e182f14d-operator-scripts\") pod \"0863e69a-b331-4647-a79c-d0a2e182f14d\" (UID: \"0863e69a-b331-4647-a79c-d0a2e182f14d\") " Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.763591 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0863e69a-b331-4647-a79c-d0a2e182f14d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0863e69a-b331-4647-a79c-d0a2e182f14d" (UID: "0863e69a-b331-4647-a79c-d0a2e182f14d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.764166 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0863e69a-b331-4647-a79c-d0a2e182f14d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.877623 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9sh7l" Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.903542 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0863e69a-b331-4647-a79c-d0a2e182f14d-kube-api-access-ks755" (OuterVolumeSpecName: "kube-api-access-ks755") pod "0863e69a-b331-4647-a79c-d0a2e182f14d" (UID: "0863e69a-b331-4647-a79c-d0a2e182f14d"). InnerVolumeSpecName "kube-api-access-ks755". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:41 crc kubenswrapper[4767]: I1124 21:54:41.967323 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks755\" (UniqueName: \"kubernetes.io/projected/0863e69a-b331-4647-a79c-d0a2e182f14d-kube-api-access-ks755\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.010987 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-48vjg" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.039419 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8d7e-account-create-vv26x" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.068333 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-operator-scripts\") pod \"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec\" (UID: \"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec\") " Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.068558 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wsh6\" (UniqueName: \"kubernetes.io/projected/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-kube-api-access-9wsh6\") pod \"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec\" (UID: \"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec\") " Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.069218 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec" (UID: "ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.074715 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-kube-api-access-9wsh6" (OuterVolumeSpecName: "kube-api-access-9wsh6") pod "ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec" (UID: "ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec"). InnerVolumeSpecName "kube-api-access-9wsh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.170633 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd9e387f-20cc-4618-915a-bf9a33b40ddd-operator-scripts\") pod \"cd9e387f-20cc-4618-915a-bf9a33b40ddd\" (UID: \"cd9e387f-20cc-4618-915a-bf9a33b40ddd\") " Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.170712 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a407b22-b744-42f8-9746-30f7b21c8e2b-operator-scripts\") pod \"2a407b22-b744-42f8-9746-30f7b21c8e2b\" (UID: \"2a407b22-b744-42f8-9746-30f7b21c8e2b\") " Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.170889 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zwqc\" (UniqueName: \"kubernetes.io/projected/2a407b22-b744-42f8-9746-30f7b21c8e2b-kube-api-access-2zwqc\") pod \"2a407b22-b744-42f8-9746-30f7b21c8e2b\" (UID: \"2a407b22-b744-42f8-9746-30f7b21c8e2b\") " Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.170973 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2gz7\" (UniqueName: \"kubernetes.io/projected/cd9e387f-20cc-4618-915a-bf9a33b40ddd-kube-api-access-k2gz7\") pod \"cd9e387f-20cc-4618-915a-bf9a33b40ddd\" (UID: \"cd9e387f-20cc-4618-915a-bf9a33b40ddd\") " Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.171606 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wsh6\" (UniqueName: \"kubernetes.io/projected/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-kube-api-access-9wsh6\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.171638 4767 reconciler_common.go:293] "Volume 
detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.171728 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9e387f-20cc-4618-915a-bf9a33b40ddd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd9e387f-20cc-4618-915a-bf9a33b40ddd" (UID: "cd9e387f-20cc-4618-915a-bf9a33b40ddd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.174174 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a407b22-b744-42f8-9746-30f7b21c8e2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a407b22-b744-42f8-9746-30f7b21c8e2b" (UID: "2a407b22-b744-42f8-9746-30f7b21c8e2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.175025 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a407b22-b744-42f8-9746-30f7b21c8e2b-kube-api-access-2zwqc" (OuterVolumeSpecName: "kube-api-access-2zwqc") pod "2a407b22-b744-42f8-9746-30f7b21c8e2b" (UID: "2a407b22-b744-42f8-9746-30f7b21c8e2b"). InnerVolumeSpecName "kube-api-access-2zwqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.177400 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9e387f-20cc-4618-915a-bf9a33b40ddd-kube-api-access-k2gz7" (OuterVolumeSpecName: "kube-api-access-k2gz7") pod "cd9e387f-20cc-4618-915a-bf9a33b40ddd" (UID: "cd9e387f-20cc-4618-915a-bf9a33b40ddd"). InnerVolumeSpecName "kube-api-access-k2gz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.273157 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.273355 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zwqc\" (UniqueName: \"kubernetes.io/projected/2a407b22-b744-42f8-9746-30f7b21c8e2b-kube-api-access-2zwqc\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.273368 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2gz7\" (UniqueName: \"kubernetes.io/projected/cd9e387f-20cc-4618-915a-bf9a33b40ddd-kube-api-access-k2gz7\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.273378 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd9e387f-20cc-4618-915a-bf9a33b40ddd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.273388 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a407b22-b744-42f8-9746-30f7b21c8e2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:42 crc kubenswrapper[4767]: E1124 21:54:42.273827 4767 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:54:42 crc kubenswrapper[4767]: E1124 21:54:42.273945 4767 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:54:42 crc kubenswrapper[4767]: E1124 21:54:42.274226 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift podName:db319bac-943e-4baa-afb0-2089513c8935 nodeName:}" failed. No retries permitted until 2025-11-24 21:54:58.27413091 +0000 UTC m=+981.191114432 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift") pod "swift-storage-0" (UID: "db319bac-943e-4baa-afb0-2089513c8935") : configmap "swift-ring-files" not found Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.326359 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2grc5" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.332405 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8d7e-account-create-vv26x" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.333619 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9sh7l" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.343491 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-48vjg" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.351986 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2grc5" event={"ID":"0863e69a-b331-4647-a79c-d0a2e182f14d","Type":"ContainerDied","Data":"a727d481cc436bcb5035eda95fd2ae90bac7b59e33c0505d147883dbdd71636d"} Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.352032 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a727d481cc436bcb5035eda95fd2ae90bac7b59e33c0505d147883dbdd71636d" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.352045 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d7e-account-create-vv26x" event={"ID":"2a407b22-b744-42f8-9746-30f7b21c8e2b","Type":"ContainerDied","Data":"af0f9f5b42bfae4b91e13bf8a711449263378d38b82f3228687ea0b8cb56ca4c"} Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.352056 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af0f9f5b42bfae4b91e13bf8a711449263378d38b82f3228687ea0b8cb56ca4c" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.352064 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9sh7l" event={"ID":"ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec","Type":"ContainerDied","Data":"28f24d94d6438d3d6d28331531b7753d34c5acf152e1dacbdc7723b85e6b2ed6"} Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.352076 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28f24d94d6438d3d6d28331531b7753d34c5acf152e1dacbdc7723b85e6b2ed6" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.352084 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-48vjg" event={"ID":"cd9e387f-20cc-4618-915a-bf9a33b40ddd","Type":"ContainerDied","Data":"67a036bccb618ec0cb0b23a9315b5d452d67db54506c5ba2d77eab6c9c1b66b4"} Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.352092 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67a036bccb618ec0cb0b23a9315b5d452d67db54506c5ba2d77eab6c9c1b66b4" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.776873 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c148-account-create-dm25n" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.895071 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt2xz\" (UniqueName: \"kubernetes.io/projected/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-kube-api-access-jt2xz\") pod \"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda\" (UID: \"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda\") " Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.895219 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-operator-scripts\") pod \"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda\" (UID: \"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda\") " Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.896435 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda" (UID: "92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.903888 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-kube-api-access-jt2xz" (OuterVolumeSpecName: "kube-api-access-jt2xz") pod "92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda" (UID: "92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda"). InnerVolumeSpecName "kube-api-access-jt2xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.944005 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-358f-account-create-4kwkf" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.949037 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-134c-account-create-bkvsg" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.965169 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rcbjg" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.997060 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt2xz\" (UniqueName: \"kubernetes.io/projected/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-kube-api-access-jt2xz\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:42 crc kubenswrapper[4767]: I1124 21:54:42.997096 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.097822 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8jfn\" (UniqueName: \"kubernetes.io/projected/4ef6da9a-e416-4a02-8507-1a4caabc88c6-kube-api-access-w8jfn\") pod \"4ef6da9a-e416-4a02-8507-1a4caabc88c6\" (UID: \"4ef6da9a-e416-4a02-8507-1a4caabc88c6\") " Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.097910 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-operator-scripts\") pod \"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910\" (UID: \"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910\") " Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.097952 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef6da9a-e416-4a02-8507-1a4caabc88c6-operator-scripts\") pod \"4ef6da9a-e416-4a02-8507-1a4caabc88c6\" (UID: \"4ef6da9a-e416-4a02-8507-1a4caabc88c6\") " Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.098004 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ce5857-f490-47b9-b07d-ecf4d1aa2045-operator-scripts\") pod \"88ce5857-f490-47b9-b07d-ecf4d1aa2045\" (UID: \"88ce5857-f490-47b9-b07d-ecf4d1aa2045\") " Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.098089 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn4q8\" (UniqueName: \"kubernetes.io/projected/88ce5857-f490-47b9-b07d-ecf4d1aa2045-kube-api-access-vn4q8\") pod \"88ce5857-f490-47b9-b07d-ecf4d1aa2045\" (UID: \"88ce5857-f490-47b9-b07d-ecf4d1aa2045\") " Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.098129 4767 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8cq4v\" (UniqueName: \"kubernetes.io/projected/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-kube-api-access-8cq4v\") pod \"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910\" (UID: \"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910\") " Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.098911 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef6da9a-e416-4a02-8507-1a4caabc88c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ef6da9a-e416-4a02-8507-1a4caabc88c6" (UID: "4ef6da9a-e416-4a02-8507-1a4caabc88c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.099008 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ce5857-f490-47b9-b07d-ecf4d1aa2045-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88ce5857-f490-47b9-b07d-ecf4d1aa2045" (UID: "88ce5857-f490-47b9-b07d-ecf4d1aa2045"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.099353 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910" (UID: "6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.102057 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-kube-api-access-8cq4v" (OuterVolumeSpecName: "kube-api-access-8cq4v") pod "6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910" (UID: "6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910"). InnerVolumeSpecName "kube-api-access-8cq4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.102611 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef6da9a-e416-4a02-8507-1a4caabc88c6-kube-api-access-w8jfn" (OuterVolumeSpecName: "kube-api-access-w8jfn") pod "4ef6da9a-e416-4a02-8507-1a4caabc88c6" (UID: "4ef6da9a-e416-4a02-8507-1a4caabc88c6"). InnerVolumeSpecName "kube-api-access-w8jfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.102741 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ce5857-f490-47b9-b07d-ecf4d1aa2045-kube-api-access-vn4q8" (OuterVolumeSpecName: "kube-api-access-vn4q8") pod "88ce5857-f490-47b9-b07d-ecf4d1aa2045" (UID: "88ce5857-f490-47b9-b07d-ecf4d1aa2045"). InnerVolumeSpecName "kube-api-access-vn4q8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.200163 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ce5857-f490-47b9-b07d-ecf4d1aa2045-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.200220 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn4q8\" (UniqueName: \"kubernetes.io/projected/88ce5857-f490-47b9-b07d-ecf4d1aa2045-kube-api-access-vn4q8\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.200243 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cq4v\" (UniqueName: \"kubernetes.io/projected/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-kube-api-access-8cq4v\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.200264 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8jfn\" (UniqueName: \"kubernetes.io/projected/4ef6da9a-e416-4a02-8507-1a4caabc88c6-kube-api-access-w8jfn\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.200305 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.200330 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef6da9a-e416-4a02-8507-1a4caabc88c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.355969 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9fa46701-7516-4376-a72b-10c3eca271f8","Type":"ContainerStarted","Data":"cb5a51067de7206023eeef4a91560518ba01577f3f704c9c7e5917433288d26c"} Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.358623 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rcbjg" event={"ID":"4ef6da9a-e416-4a02-8507-1a4caabc88c6","Type":"ContainerDied","Data":"fe079b39362fa60d3be56b6dd5e684d608c03b1c3d0802d671ba049379454523"} Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.358677 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe079b39362fa60d3be56b6dd5e684d608c03b1c3d0802d671ba049379454523" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.358752 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rcbjg" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.361007 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c148-account-create-dm25n" event={"ID":"92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda","Type":"ContainerDied","Data":"7726e2d3c62968ac3ed3b027a546b2b25995dd49a494b8416d2c5f725dc26c45"} Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.361049 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7726e2d3c62968ac3ed3b027a546b2b25995dd49a494b8416d2c5f725dc26c45" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.361123 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c148-account-create-dm25n" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.364901 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-134c-account-create-bkvsg" event={"ID":"88ce5857-f490-47b9-b07d-ecf4d1aa2045","Type":"ContainerDied","Data":"1f6528d1e48f694f9b572bee508ca1566e66c9019d083c7333a24e14f429eeae"} Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.364954 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f6528d1e48f694f9b572bee508ca1566e66c9019d083c7333a24e14f429eeae" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.365025 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-134c-account-create-bkvsg" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.373252 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-358f-account-create-4kwkf" event={"ID":"6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910","Type":"ContainerDied","Data":"979260ac3ba0afe83901dd75958d83ff129cbb1ed4de204b96662665e443a0d1"} Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.373369 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-358f-account-create-4kwkf" Nov 24 21:54:43 crc kubenswrapper[4767]: I1124 21:54:43.373380 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="979260ac3ba0afe83901dd75958d83ff129cbb1ed4de204b96662665e443a0d1" Nov 24 21:54:46 crc kubenswrapper[4767]: I1124 21:54:46.401404 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9fa46701-7516-4376-a72b-10c3eca271f8","Type":"ContainerStarted","Data":"965e7d16695881568ebc9dead0feb3dc11c3eb3fee826c06b32c892366474314"} Nov 24 21:54:46 crc kubenswrapper[4767]: I1124 21:54:46.443329 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=6.861603535 podStartE2EDuration="51.4433084s" podCreationTimestamp="2025-11-24 21:53:55 +0000 UTC" firstStartedPulling="2025-11-24 21:54:01.506060187 +0000 UTC m=+924.423043559" lastFinishedPulling="2025-11-24 21:54:46.087765052 +0000 UTC m=+969.004748424" observedRunningTime="2025-11-24 21:54:46.435476578 +0000 UTC m=+969.352460000" watchObservedRunningTime="2025-11-24 21:54:46.4433084 +0000 UTC m=+969.360291782" Nov 24 21:54:46 crc kubenswrapper[4767]: I1124 21:54:46.455509 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 21:54:46 crc kubenswrapper[4767]: I1124 21:54:46.511195 4767 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod7932e662-ab03-4bd6-b360-a21c21c93f1a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod7932e662-ab03-4bd6-b360-a21c21c93f1a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod7932e662_ab03_4bd6_b360_a21c21c93f1a.slice" Nov 24 21:54:47 crc kubenswrapper[4767]: I1124 21:54:47.414265 4767 generic.go:334] "Generic (PLEG): container finished" podID="30d319c1-5268-413c-a6db-9d376a2217c3" containerID="b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948" exitCode=0 Nov 24 21:54:47 crc kubenswrapper[4767]: I1124 21:54:47.414424 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"30d319c1-5268-413c-a6db-9d376a2217c3","Type":"ContainerDied","Data":"b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948"} Nov 24 21:54:47 crc kubenswrapper[4767]: I1124 21:54:47.417674 4767 generic.go:334] "Generic (PLEG): container finished" podID="084fdc28-199d-44c7-93c8-67792c6f4829" containerID="9a2852d147c26cff0f2ea6dad65d677e38146207ea8fcb744eec52baa03cfb38" exitCode=0 Nov 24 21:54:47 crc kubenswrapper[4767]: I1124 21:54:47.417754 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfdzc" event={"ID":"084fdc28-199d-44c7-93c8-67792c6f4829","Type":"ContainerDied","Data":"9a2852d147c26cff0f2ea6dad65d677e38146207ea8fcb744eec52baa03cfb38"} Nov 24 21:54:47 crc kubenswrapper[4767]: I1124 21:54:47.420551 4767 generic.go:334] "Generic (PLEG): container finished" podID="5c433e97-140e-43fe-aa7b-1bd14d9e78b9" containerID="f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21" exitCode=0 Nov 24 21:54:47 crc kubenswrapper[4767]: I1124 21:54:47.420792 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5c433e97-140e-43fe-aa7b-1bd14d9e78b9","Type":"ContainerDied","Data":"f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21"} Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.192113 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ngft4" podUID="6e7e218a-3550-499e-8337-5940f98af41c" containerName="ovn-controller" probeResult="failure" output=< Nov 24 21:54:48 crc kubenswrapper[4767]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 21:54:48 crc kubenswrapper[4767]: > Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.255443 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.260567 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6bq9m" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.306445 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-l8gk2"] Nov 24 21:54:48 crc kubenswrapper[4767]: E1124 21:54:48.306829 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda" containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.306850 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda" containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: E1124 21:54:48.306864 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0863e69a-b331-4647-a79c-d0a2e182f14d" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.306872 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0863e69a-b331-4647-a79c-d0a2e182f14d" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: E1124 21:54:48.306893 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910" containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.306901 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910" containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: E1124 21:54:48.306916 4767 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cf1ee997-d0ba-4242-9cf8-58e7ac123d86" containerName="dnsmasq-dns" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.306924 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1ee997-d0ba-4242-9cf8-58e7ac123d86" containerName="dnsmasq-dns" Nov 24 21:54:48 crc kubenswrapper[4767]: E1124 21:54:48.306937 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9e387f-20cc-4618-915a-bf9a33b40ddd" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.306946 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9e387f-20cc-4618-915a-bf9a33b40ddd" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: E1124 21:54:48.306959 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.306968 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: E1124 21:54:48.306983 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a407b22-b744-42f8-9746-30f7b21c8e2b" containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.306991 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a407b22-b744-42f8-9746-30f7b21c8e2b" containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: E1124 21:54:48.307009 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef6da9a-e416-4a02-8507-1a4caabc88c6" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307018 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef6da9a-e416-4a02-8507-1a4caabc88c6" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: E1124 21:54:48.307034 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ce5857-f490-47b9-b07d-ecf4d1aa2045" containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307042 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ce5857-f490-47b9-b07d-ecf4d1aa2045" containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: E1124 21:54:48.307066 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1ee997-d0ba-4242-9cf8-58e7ac123d86" containerName="init" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307073 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1ee997-d0ba-4242-9cf8-58e7ac123d86" containerName="init" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307310 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910" containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307345 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="0863e69a-b331-4647-a79c-d0a2e182f14d" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307370 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1ee997-d0ba-4242-9cf8-58e7ac123d86" containerName="dnsmasq-dns" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307383 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ce5857-f490-47b9-b07d-ecf4d1aa2045" 
containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307397 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda" containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307413 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a407b22-b744-42f8-9746-30f7b21c8e2b" containerName="mariadb-account-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307443 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307460 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9e387f-20cc-4618-915a-bf9a33b40ddd" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.307473 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef6da9a-e416-4a02-8507-1a4caabc88c6" containerName="mariadb-database-create" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.308242 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.310090 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2c6jp" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.311693 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.327603 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-l8gk2"] Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.395954 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-db-sync-config-data\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.395996 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2pn6\" (UniqueName: \"kubernetes.io/projected/5783bdd7-a5b2-4ba7-9aa5-505f01383747-kube-api-access-x2pn6\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.396044 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-config-data\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.396135 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-combined-ca-bundle\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.428986 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"30d319c1-5268-413c-a6db-9d376a2217c3","Type":"ContainerStarted","Data":"406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564"} Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.429240 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.431403 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5c433e97-140e-43fe-aa7b-1bd14d9e78b9","Type":"ContainerStarted","Data":"536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b"} Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.431925 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.458574 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.393329002 podStartE2EDuration="1m0.458549868s" podCreationTimestamp="2025-11-24 21:53:48 +0000 UTC" firstStartedPulling="2025-11-24 21:54:00.26661199 +0000 UTC m=+923.183595362" lastFinishedPulling="2025-11-24 21:54:10.331832846 +0000 UTC m=+933.248816228" observedRunningTime="2025-11-24 21:54:48.455558463 +0000 UTC m=+971.372541835" watchObservedRunningTime="2025-11-24 21:54:48.458549868 +0000 UTC m=+971.375533240" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.497509 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.795570128 podStartE2EDuration="1m0.497490643s" podCreationTimestamp="2025-11-24 21:53:48 +0000 UTC" firstStartedPulling="2025-11-24 21:54:01.436448922 +0000 UTC m=+924.353432294" lastFinishedPulling="2025-11-24 21:54:10.138369437 +0000 UTC m=+933.055352809" observedRunningTime="2025-11-24 21:54:48.495887177 +0000 UTC m=+971.412870549" watchObservedRunningTime="2025-11-24 21:54:48.497490643 +0000 UTC m=+971.414474015" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.498160 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-combined-ca-bundle\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.498286 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-db-sync-config-data\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.498314 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2pn6\" (UniqueName: \"kubernetes.io/projected/5783bdd7-a5b2-4ba7-9aa5-505f01383747-kube-api-access-x2pn6\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.498373 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-config-data\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 
21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.515234 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ngft4-config-dsvdl"] Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.516888 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.527053 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-config-data\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.527060 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-combined-ca-bundle\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.527333 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-db-sync-config-data\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.556932 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2pn6\" (UniqueName: \"kubernetes.io/projected/5783bdd7-a5b2-4ba7-9aa5-505f01383747-kube-api-access-x2pn6\") pod \"glance-db-sync-l8gk2\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.570151 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.577568 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ngft4-config-dsvdl"] Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.602016 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run-ovn\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.602093 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.602115 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-log-ovn\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.602129 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-scripts\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.602148 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-additional-scripts\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.602171 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pln7t\" (UniqueName: \"kubernetes.io/projected/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-kube-api-access-pln7t\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.630654 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-l8gk2" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.704351 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run-ovn\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.704443 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.704465 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-log-ovn\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.704483 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-scripts\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.704498 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-additional-scripts\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.704520 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pln7t\" (UniqueName: \"kubernetes.io/projected/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-kube-api-access-pln7t\") pod 
\"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.705423 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run-ovn\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.705431 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-log-ovn\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.705474 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.707867 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-scripts\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.709706 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-additional-scripts\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.727416 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pln7t\" (UniqueName: \"kubernetes.io/projected/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-kube-api-access-pln7t\") pod \"ovn-controller-ngft4-config-dsvdl\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:48 crc kubenswrapper[4767]: I1124 21:54:48.885165 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.055609 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.113162 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-dispersionconf\") pod \"084fdc28-199d-44c7-93c8-67792c6f4829\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.113750 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/084fdc28-199d-44c7-93c8-67792c6f4829-etc-swift\") pod \"084fdc28-199d-44c7-93c8-67792c6f4829\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.113828 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-scripts\") pod \"084fdc28-199d-44c7-93c8-67792c6f4829\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.113922 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7f9q\" (UniqueName: \"kubernetes.io/projected/084fdc28-199d-44c7-93c8-67792c6f4829-kube-api-access-v7f9q\") pod \"084fdc28-199d-44c7-93c8-67792c6f4829\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.113966 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-ring-data-devices\") pod \"084fdc28-199d-44c7-93c8-67792c6f4829\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.114007 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-swiftconf\") pod \"084fdc28-199d-44c7-93c8-67792c6f4829\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.114045 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-combined-ca-bundle\") pod \"084fdc28-199d-44c7-93c8-67792c6f4829\" (UID: \"084fdc28-199d-44c7-93c8-67792c6f4829\") " Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.114986 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "084fdc28-199d-44c7-93c8-67792c6f4829" (UID: "084fdc28-199d-44c7-93c8-67792c6f4829"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.115358 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/084fdc28-199d-44c7-93c8-67792c6f4829-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "084fdc28-199d-44c7-93c8-67792c6f4829" (UID: "084fdc28-199d-44c7-93c8-67792c6f4829"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.121874 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/084fdc28-199d-44c7-93c8-67792c6f4829-kube-api-access-v7f9q" (OuterVolumeSpecName: "kube-api-access-v7f9q") pod "084fdc28-199d-44c7-93c8-67792c6f4829" (UID: "084fdc28-199d-44c7-93c8-67792c6f4829"). InnerVolumeSpecName "kube-api-access-v7f9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.126500 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "084fdc28-199d-44c7-93c8-67792c6f4829" (UID: "084fdc28-199d-44c7-93c8-67792c6f4829"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.150145 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "084fdc28-199d-44c7-93c8-67792c6f4829" (UID: "084fdc28-199d-44c7-93c8-67792c6f4829"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.157757 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "084fdc28-199d-44c7-93c8-67792c6f4829" (UID: "084fdc28-199d-44c7-93c8-67792c6f4829"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.166258 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-scripts" (OuterVolumeSpecName: "scripts") pod "084fdc28-199d-44c7-93c8-67792c6f4829" (UID: "084fdc28-199d-44c7-93c8-67792c6f4829"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.219041 4767 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.219072 4767 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/084fdc28-199d-44c7-93c8-67792c6f4829-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.219082 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.219093 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7f9q\" (UniqueName: \"kubernetes.io/projected/084fdc28-199d-44c7-93c8-67792c6f4829-kube-api-access-v7f9q\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.219104 4767 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/084fdc28-199d-44c7-93c8-67792c6f4829-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.219112 4767 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.219120 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/084fdc28-199d-44c7-93c8-67792c6f4829-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.440098 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pfdzc" Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.440092 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfdzc" event={"ID":"084fdc28-199d-44c7-93c8-67792c6f4829","Type":"ContainerDied","Data":"1156e5d947212ed814e75295c69ae520bcd133ef7db0c8509ec00e3942532cca"} Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.440152 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1156e5d947212ed814e75295c69ae520bcd133ef7db0c8509ec00e3942532cca" Nov 24 21:54:49 crc kubenswrapper[4767]: W1124 21:54:49.509194 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44bdd6bb_a5cd_4e93_8d22_95211ccf53d5.slice/crio-e4ac941aebbf91ad80b264efe77c7f3786e4ffe697c309decd88108d6ab1e97b WatchSource:0}: Error finding container e4ac941aebbf91ad80b264efe77c7f3786e4ffe697c309decd88108d6ab1e97b: Status 404 returned error can't find the container with id e4ac941aebbf91ad80b264efe77c7f3786e4ffe697c309decd88108d6ab1e97b Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.510990 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ngft4-config-dsvdl"] Nov 24 21:54:49 crc kubenswrapper[4767]: I1124 21:54:49.718812 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-l8gk2"] Nov 24 21:54:50 crc kubenswrapper[4767]: I1124 21:54:50.449832 4767 generic.go:334] "Generic (PLEG): container finished" podID="44bdd6bb-a5cd-4e93-8d22-95211ccf53d5" containerID="52447f25e25ed7a7fe19296f5a720d733bfe1885d025d991e1c25d2d1e789a46" exitCode=0 Nov 24 21:54:50 crc kubenswrapper[4767]: I1124 21:54:50.449896 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ngft4-config-dsvdl" event={"ID":"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5","Type":"ContainerDied","Data":"52447f25e25ed7a7fe19296f5a720d733bfe1885d025d991e1c25d2d1e789a46"} Nov 24 21:54:50 crc kubenswrapper[4767]: I1124 21:54:50.449965 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ngft4-config-dsvdl" event={"ID":"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5","Type":"ContainerStarted","Data":"e4ac941aebbf91ad80b264efe77c7f3786e4ffe697c309decd88108d6ab1e97b"} Nov 24 21:54:50 crc kubenswrapper[4767]: I1124 21:54:50.451470 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l8gk2" event={"ID":"5783bdd7-a5b2-4ba7-9aa5-505f01383747","Type":"ContainerStarted","Data":"cd483243f880eebdf87bc180b9bda8bad77dffff1b28de0aa4bf4a41aca9a1df"} Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.752150 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.869546 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run-ovn\") pod \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.869665 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run\") pod \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.869675 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5" (UID: "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.869712 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-log-ovn\") pod \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.869753 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5" (UID: "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.869809 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run" (OuterVolumeSpecName: "var-run") pod "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5" (UID: "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.869890 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pln7t\" (UniqueName: \"kubernetes.io/projected/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-kube-api-access-pln7t\") pod \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.869927 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-additional-scripts\") pod \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.870045 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-scripts\") pod \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\" (UID: \"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5\") " Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.871061 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5" (UID: "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.871342 4767 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.871619 4767 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.871633 4767 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.871329 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-scripts" (OuterVolumeSpecName: "scripts") pod "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5" (UID: "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.879644 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-kube-api-access-pln7t" (OuterVolumeSpecName: "kube-api-access-pln7t") pod "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5" (UID: "44bdd6bb-a5cd-4e93-8d22-95211ccf53d5"). InnerVolumeSpecName "kube-api-access-pln7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.973026 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.973416 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pln7t\" (UniqueName: \"kubernetes.io/projected/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-kube-api-access-pln7t\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:51 crc kubenswrapper[4767]: I1124 21:54:51.973585 4767 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.473935 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ngft4-config-dsvdl" event={"ID":"44bdd6bb-a5cd-4e93-8d22-95211ccf53d5","Type":"ContainerDied","Data":"e4ac941aebbf91ad80b264efe77c7f3786e4ffe697c309decd88108d6ab1e97b"} Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.474348 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4ac941aebbf91ad80b264efe77c7f3786e4ffe697c309decd88108d6ab1e97b" Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.474170 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ngft4-config-dsvdl" Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.851021 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ngft4-config-dsvdl"] Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.869123 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ngft4-config-dsvdl"] Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.973047 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ngft4-config-jgcjq"] Nov 24 21:54:52 crc kubenswrapper[4767]: E1124 21:54:52.973497 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bdd6bb-a5cd-4e93-8d22-95211ccf53d5" containerName="ovn-config" Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.973516 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bdd6bb-a5cd-4e93-8d22-95211ccf53d5" containerName="ovn-config" Nov 24 21:54:52 crc kubenswrapper[4767]: E1124 21:54:52.973542 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084fdc28-199d-44c7-93c8-67792c6f4829" containerName="swift-ring-rebalance" Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.973549 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="084fdc28-199d-44c7-93c8-67792c6f4829" containerName="swift-ring-rebalance" Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.973695 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bdd6bb-a5cd-4e93-8d22-95211ccf53d5" containerName="ovn-config" Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.973706 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="084fdc28-199d-44c7-93c8-67792c6f4829" containerName="swift-ring-rebalance" Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.974418 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.980830 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 21:54:52 crc kubenswrapper[4767]: I1124 21:54:52.985491 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ngft4-config-jgcjq"] Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.094814 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-log-ovn\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.094908 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-scripts\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.094933 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-additional-scripts\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.094989 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run-ovn\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.095026 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp992\" (UniqueName: \"kubernetes.io/projected/31b4d983-69ec-478a-b977-cc6d4e4c13e6-kube-api-access-dp992\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.095058 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.181674 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ngft4" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.196090 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.196177 4767 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-log-ovn\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.196245 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-scripts\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.196280 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-additional-scripts\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.196333 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run-ovn\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.196372 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp992\" (UniqueName: \"kubernetes.io/projected/31b4d983-69ec-478a-b977-cc6d4e4c13e6-kube-api-access-dp992\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.196526 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.196526 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-log-ovn\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.196637 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run-ovn\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.197450 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-additional-scripts\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.198595 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-scripts\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.219029 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp992\" (UniqueName: \"kubernetes.io/projected/31b4d983-69ec-478a-b977-cc6d4e4c13e6-kube-api-access-dp992\") pod \"ovn-controller-ngft4-config-jgcjq\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.294507 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:54:53 crc kubenswrapper[4767]: I1124 21:54:53.732334 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ngft4-config-jgcjq"] Nov 24 21:54:53 crc kubenswrapper[4767]: W1124 21:54:53.746066 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31b4d983_69ec_478a_b977_cc6d4e4c13e6.slice/crio-ff01fd27df6b474b3de48e6cf71ffd17f2897e42df836d65a04cadd163b55642 WatchSource:0}: Error finding container ff01fd27df6b474b3de48e6cf71ffd17f2897e42df836d65a04cadd163b55642: Status 404 returned error can't find the container with id ff01fd27df6b474b3de48e6cf71ffd17f2897e42df836d65a04cadd163b55642 Nov 24 21:54:54 crc kubenswrapper[4767]: I1124 21:54:54.325660 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bdd6bb-a5cd-4e93-8d22-95211ccf53d5" path="/var/lib/kubelet/pods/44bdd6bb-a5cd-4e93-8d22-95211ccf53d5/volumes" Nov 24 21:54:54 crc kubenswrapper[4767]: I1124 21:54:54.497217 4767 generic.go:334] "Generic (PLEG): container finished" podID="31b4d983-69ec-478a-b977-cc6d4e4c13e6" containerID="0f4318efa102b2021ecbb190e993c55ef88e68e1b29c03ab540f8048b98d3c08" exitCode=0 Nov 24 21:54:54 crc kubenswrapper[4767]: I1124 21:54:54.497535 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ngft4-config-jgcjq" event={"ID":"31b4d983-69ec-478a-b977-cc6d4e4c13e6","Type":"ContainerDied","Data":"0f4318efa102b2021ecbb190e993c55ef88e68e1b29c03ab540f8048b98d3c08"} Nov 24 21:54:54 crc kubenswrapper[4767]: I1124 21:54:54.497577 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ngft4-config-jgcjq" event={"ID":"31b4d983-69ec-478a-b977-cc6d4e4c13e6","Type":"ContainerStarted","Data":"ff01fd27df6b474b3de48e6cf71ffd17f2897e42df836d65a04cadd163b55642"} Nov 24 21:54:56 crc kubenswrapper[4767]: I1124 21:54:56.454941 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 21:54:56 crc kubenswrapper[4767]: I1124 21:54:56.458053 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 24 21:54:56 crc kubenswrapper[4767]: I1124 21:54:56.513142 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 21:54:58 crc kubenswrapper[4767]: I1124 21:54:58.276807 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift\") pod 
\"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:58 crc kubenswrapper[4767]: I1124 21:54:58.298433 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db319bac-943e-4baa-afb0-2089513c8935-etc-swift\") pod \"swift-storage-0\" (UID: \"db319bac-943e-4baa-afb0-2089513c8935\") " pod="openstack/swift-storage-0" Nov 24 21:54:58 crc kubenswrapper[4767]: I1124 21:54:58.489390 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 21:54:59 crc kubenswrapper[4767]: I1124 21:54:59.544422 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:54:59 crc kubenswrapper[4767]: I1124 21:54:59.547442 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="thanos-sidecar" containerID="cri-o://965e7d16695881568ebc9dead0feb3dc11c3eb3fee826c06b32c892366474314" gracePeriod=600 Nov 24 21:54:59 crc kubenswrapper[4767]: I1124 21:54:59.547405 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="prometheus" containerID="cri-o://941bf4b90f241ba71fc8a7839ff3dfbd09862e995b99fa8ddd7b52c1bf32771b" gracePeriod=600 Nov 24 21:54:59 crc kubenswrapper[4767]: I1124 21:54:59.547446 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="config-reloader" containerID="cri-o://cb5a51067de7206023eeef4a91560518ba01577f3f704c9c7e5917433288d26c" gracePeriod=600 Nov 24 21:54:59 crc kubenswrapper[4767]: I1124 21:54:59.597034 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:54:59 crc kubenswrapper[4767]: I1124 21:54:59.882471 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 21:55:00 crc kubenswrapper[4767]: I1124 21:55:00.555141 4767 generic.go:334] "Generic (PLEG): container finished" podID="9fa46701-7516-4376-a72b-10c3eca271f8" containerID="965e7d16695881568ebc9dead0feb3dc11c3eb3fee826c06b32c892366474314" exitCode=0 Nov 24 21:55:00 crc kubenswrapper[4767]: I1124 21:55:00.555398 4767 generic.go:334] "Generic (PLEG): container finished" podID="9fa46701-7516-4376-a72b-10c3eca271f8" containerID="cb5a51067de7206023eeef4a91560518ba01577f3f704c9c7e5917433288d26c" exitCode=0 Nov 24 21:55:00 crc kubenswrapper[4767]: I1124 21:55:00.555358 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9fa46701-7516-4376-a72b-10c3eca271f8","Type":"ContainerDied","Data":"965e7d16695881568ebc9dead0feb3dc11c3eb3fee826c06b32c892366474314"} Nov 24 21:55:00 crc kubenswrapper[4767]: I1124 21:55:00.555470 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9fa46701-7516-4376-a72b-10c3eca271f8","Type":"ContainerDied","Data":"cb5a51067de7206023eeef4a91560518ba01577f3f704c9c7e5917433288d26c"} Nov 24 21:55:00 crc kubenswrapper[4767]: I1124 21:55:00.555483 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"9fa46701-7516-4376-a72b-10c3eca271f8","Type":"ContainerDied","Data":"941bf4b90f241ba71fc8a7839ff3dfbd09862e995b99fa8ddd7b52c1bf32771b"} Nov 24 21:55:00 crc kubenswrapper[4767]: I1124 21:55:00.555408 4767 generic.go:334] "Generic (PLEG): container finished" podID="9fa46701-7516-4376-a72b-10c3eca271f8" containerID="941bf4b90f241ba71fc8a7839ff3dfbd09862e995b99fa8ddd7b52c1bf32771b" exitCode=0 Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.456378 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.512444 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hjlg6"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.513590 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hjlg6" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.537900 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hjlg6"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.557325 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-7wzxw"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.558584 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.561327 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-mrbvr" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.561680 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.572784 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2216-account-create-2hzgb"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.573892 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2216-account-create-2hzgb" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.582767 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.586052 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-7wzxw"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.602742 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2216-account-create-2hzgb"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.646944 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxc7\" (UniqueName: \"kubernetes.io/projected/d803aeed-f0af-4587-b58d-1e7e8273a21d-kube-api-access-fjxc7\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.647005 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-config-data\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.647040 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-combined-ca-bundle\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.647076 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmks6\" (UniqueName: \"kubernetes.io/projected/96621856-cbd1-4e79-a210-59cb502ba291-kube-api-access-vmks6\") pod \"cinder-db-create-hjlg6\" (UID: \"96621856-cbd1-4e79-a210-59cb502ba291\") " pod="openstack/cinder-db-create-hjlg6" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.647120 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96621856-cbd1-4e79-a210-59cb502ba291-operator-scripts\") pod \"cinder-db-create-hjlg6\" (UID: \"96621856-cbd1-4e79-a210-59cb502ba291\") " pod="openstack/cinder-db-create-hjlg6" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.647143 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-db-sync-config-data\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.659347 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-q7bzm"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.661161 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-q7bzm" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.675051 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0f65-account-create-h2qbg"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.676613 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0f65-account-create-h2qbg" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.681653 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.682907 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q7bzm"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.705439 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0f65-account-create-h2qbg"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.756036 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a72f88f-06d7-4a5f-b391-976efcc9ea67-operator-scripts\") pod \"cinder-2216-account-create-2hzgb\" (UID: \"5a72f88f-06d7-4a5f-b391-976efcc9ea67\") " pod="openstack/cinder-2216-account-create-2hzgb" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.756097 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-config-data\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.756138 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-combined-ca-bundle\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.756196 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzrpv\" (UniqueName: \"kubernetes.io/projected/5a72f88f-06d7-4a5f-b391-976efcc9ea67-kube-api-access-lzrpv\") pod \"cinder-2216-account-create-2hzgb\" (UID: \"5a72f88f-06d7-4a5f-b391-976efcc9ea67\") " pod="openstack/cinder-2216-account-create-2hzgb" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.756244 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmks6\" (UniqueName: \"kubernetes.io/projected/96621856-cbd1-4e79-a210-59cb502ba291-kube-api-access-vmks6\") pod \"cinder-db-create-hjlg6\" (UID: \"96621856-cbd1-4e79-a210-59cb502ba291\") " pod="openstack/cinder-db-create-hjlg6" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.756359 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96621856-cbd1-4e79-a210-59cb502ba291-operator-scripts\") pod \"cinder-db-create-hjlg6\" (UID: \"96621856-cbd1-4e79-a210-59cb502ba291\") " pod="openstack/cinder-db-create-hjlg6" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.756404 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3983d70b-b45a-4ee3-a9ef-988fa258635b-operator-scripts\") pod \"barbican-db-create-q7bzm\" (UID: \"3983d70b-b45a-4ee3-a9ef-988fa258635b\") " pod="openstack/barbican-db-create-q7bzm" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.756431 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-db-sync-config-data\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.756457 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcgx7\" (UniqueName: \"kubernetes.io/projected/3983d70b-b45a-4ee3-a9ef-988fa258635b-kube-api-access-tcgx7\") pod \"barbican-db-create-q7bzm\" (UID: \"3983d70b-b45a-4ee3-a9ef-988fa258635b\") " pod="openstack/barbican-db-create-q7bzm" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.756599 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjxc7\" (UniqueName: \"kubernetes.io/projected/d803aeed-f0af-4587-b58d-1e7e8273a21d-kube-api-access-fjxc7\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.762201 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96621856-cbd1-4e79-a210-59cb502ba291-operator-scripts\") pod \"cinder-db-create-hjlg6\" (UID: \"96621856-cbd1-4e79-a210-59cb502ba291\") " pod="openstack/cinder-db-create-hjlg6" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.771165 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-config-data\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.784512 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6dw5c"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.789827 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-combined-ca-bundle\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.792016 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6dw5c"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.792149 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6dw5c" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.793723 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjxc7\" (UniqueName: \"kubernetes.io/projected/d803aeed-f0af-4587-b58d-1e7e8273a21d-kube-api-access-fjxc7\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.795958 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmks6\" (UniqueName: \"kubernetes.io/projected/96621856-cbd1-4e79-a210-59cb502ba291-kube-api-access-vmks6\") pod \"cinder-db-create-hjlg6\" (UID: \"96621856-cbd1-4e79-a210-59cb502ba291\") " pod="openstack/cinder-db-create-hjlg6" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.797696 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-db-sync-config-data\") pod \"watcher-db-sync-7wzxw\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.831484 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hjlg6" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.860235 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cht6x\" (UniqueName: \"kubernetes.io/projected/0f386a17-08d4-4c2d-8727-5171cb4275a5-kube-api-access-cht6x\") pod \"barbican-0f65-account-create-h2qbg\" (UID: \"0f386a17-08d4-4c2d-8727-5171cb4275a5\") " pod="openstack/barbican-0f65-account-create-h2qbg" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.860411 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3983d70b-b45a-4ee3-a9ef-988fa258635b-operator-scripts\") pod \"barbican-db-create-q7bzm\" (UID: \"3983d70b-b45a-4ee3-a9ef-988fa258635b\") " pod="openstack/barbican-db-create-q7bzm" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.860483 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcgx7\" (UniqueName: \"kubernetes.io/projected/3983d70b-b45a-4ee3-a9ef-988fa258635b-kube-api-access-tcgx7\") pod \"barbican-db-create-q7bzm\" (UID: \"3983d70b-b45a-4ee3-a9ef-988fa258635b\") " pod="openstack/barbican-db-create-q7bzm" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.860554 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f386a17-08d4-4c2d-8727-5171cb4275a5-operator-scripts\") pod \"barbican-0f65-account-create-h2qbg\" (UID: \"0f386a17-08d4-4c2d-8727-5171cb4275a5\") " pod="openstack/barbican-0f65-account-create-h2qbg" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.860823 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a72f88f-06d7-4a5f-b391-976efcc9ea67-operator-scripts\") pod \"cinder-2216-account-create-2hzgb\" (UID: \"5a72f88f-06d7-4a5f-b391-976efcc9ea67\") " pod="openstack/cinder-2216-account-create-2hzgb" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.861093 4767 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-lzrpv\" (UniqueName: \"kubernetes.io/projected/5a72f88f-06d7-4a5f-b391-976efcc9ea67-kube-api-access-lzrpv\") pod \"cinder-2216-account-create-2hzgb\" (UID: \"5a72f88f-06d7-4a5f-b391-976efcc9ea67\") " pod="openstack/cinder-2216-account-create-2hzgb" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.861132 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3983d70b-b45a-4ee3-a9ef-988fa258635b-operator-scripts\") pod \"barbican-db-create-q7bzm\" (UID: \"3983d70b-b45a-4ee3-a9ef-988fa258635b\") " pod="openstack/barbican-db-create-q7bzm" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.862100 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a72f88f-06d7-4a5f-b391-976efcc9ea67-operator-scripts\") pod \"cinder-2216-account-create-2hzgb\" (UID: \"5a72f88f-06d7-4a5f-b391-976efcc9ea67\") " pod="openstack/cinder-2216-account-create-2hzgb" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.889340 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.889482 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcgx7\" (UniqueName: \"kubernetes.io/projected/3983d70b-b45a-4ee3-a9ef-988fa258635b-kube-api-access-tcgx7\") pod \"barbican-db-create-q7bzm\" (UID: \"3983d70b-b45a-4ee3-a9ef-988fa258635b\") " pod="openstack/barbican-db-create-q7bzm" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.902823 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-mj4wm"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.904334 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.907508 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzrpv\" (UniqueName: \"kubernetes.io/projected/5a72f88f-06d7-4a5f-b391-976efcc9ea67-kube-api-access-lzrpv\") pod \"cinder-2216-account-create-2hzgb\" (UID: \"5a72f88f-06d7-4a5f-b391-976efcc9ea67\") " pod="openstack/cinder-2216-account-create-2hzgb" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.910766 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mj4wm"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.912796 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.913106 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbsgd" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.913259 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.913478 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.958827 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8fbf-account-create-25zrr"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.960096 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8fbf-account-create-25zrr" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.962154 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.965641 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba84d81f-ea11-4c51-81a1-2edfd90b9144-operator-scripts\") pod \"neutron-db-create-6dw5c\" (UID: \"ba84d81f-ea11-4c51-81a1-2edfd90b9144\") " pod="openstack/neutron-db-create-6dw5c" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.965727 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cht6x\" (UniqueName: \"kubernetes.io/projected/0f386a17-08d4-4c2d-8727-5171cb4275a5-kube-api-access-cht6x\") pod \"barbican-0f65-account-create-h2qbg\" (UID: \"0f386a17-08d4-4c2d-8727-5171cb4275a5\") " pod="openstack/barbican-0f65-account-create-h2qbg" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.965835 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f386a17-08d4-4c2d-8727-5171cb4275a5-operator-scripts\") pod \"barbican-0f65-account-create-h2qbg\" (UID: \"0f386a17-08d4-4c2d-8727-5171cb4275a5\") " pod="openstack/barbican-0f65-account-create-h2qbg" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.965862 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmphp\" (UniqueName: \"kubernetes.io/projected/ba84d81f-ea11-4c51-81a1-2edfd90b9144-kube-api-access-rmphp\") pod \"neutron-db-create-6dw5c\" (UID: \"ba84d81f-ea11-4c51-81a1-2edfd90b9144\") " pod="openstack/neutron-db-create-6dw5c" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.967086 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f386a17-08d4-4c2d-8727-5171cb4275a5-operator-scripts\") pod \"barbican-0f65-account-create-h2qbg\" (UID: \"0f386a17-08d4-4c2d-8727-5171cb4275a5\") " pod="openstack/barbican-0f65-account-create-h2qbg" Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.967306 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8fbf-account-create-25zrr"] Nov 24 21:55:01 crc kubenswrapper[4767]: I1124 21:55:01.985908 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cht6x\" (UniqueName: \"kubernetes.io/projected/0f386a17-08d4-4c2d-8727-5171cb4275a5-kube-api-access-cht6x\") pod \"barbican-0f65-account-create-h2qbg\" (UID: \"0f386a17-08d4-4c2d-8727-5171cb4275a5\") " pod="openstack/barbican-0f65-account-create-h2qbg" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.015284 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q7bzm" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.024751 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0f65-account-create-h2qbg" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.066739 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmphp\" (UniqueName: \"kubernetes.io/projected/ba84d81f-ea11-4c51-81a1-2edfd90b9144-kube-api-access-rmphp\") pod \"neutron-db-create-6dw5c\" (UID: \"ba84d81f-ea11-4c51-81a1-2edfd90b9144\") " pod="openstack/neutron-db-create-6dw5c" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.066799 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m88gb\" (UniqueName: \"kubernetes.io/projected/134b8eee-26a9-42c6-adec-2ac29ee455ed-kube-api-access-m88gb\") pod \"keystone-db-sync-mj4wm\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.066841 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-677m7\" (UniqueName: \"kubernetes.io/projected/ab05c5db-4946-423d-8123-d76eaa3f716a-kube-api-access-677m7\") pod \"neutron-8fbf-account-create-25zrr\" (UID: \"ab05c5db-4946-423d-8123-d76eaa3f716a\") " pod="openstack/neutron-8fbf-account-create-25zrr" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.066879 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-config-data\") pod \"keystone-db-sync-mj4wm\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.066912 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba84d81f-ea11-4c51-81a1-2edfd90b9144-operator-scripts\") pod \"neutron-db-create-6dw5c\" (UID: \"ba84d81f-ea11-4c51-81a1-2edfd90b9144\") " pod="openstack/neutron-db-create-6dw5c" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.066934 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-combined-ca-bundle\") pod \"keystone-db-sync-mj4wm\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.066957 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab05c5db-4946-423d-8123-d76eaa3f716a-operator-scripts\") pod \"neutron-8fbf-account-create-25zrr\" (UID: \"ab05c5db-4946-423d-8123-d76eaa3f716a\") " pod="openstack/neutron-8fbf-account-create-25zrr" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.074163 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba84d81f-ea11-4c51-81a1-2edfd90b9144-operator-scripts\") pod \"neutron-db-create-6dw5c\" (UID: \"ba84d81f-ea11-4c51-81a1-2edfd90b9144\") " pod="openstack/neutron-db-create-6dw5c" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.085900 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmphp\" (UniqueName: \"kubernetes.io/projected/ba84d81f-ea11-4c51-81a1-2edfd90b9144-kube-api-access-rmphp\") pod 
\"neutron-db-create-6dw5c\" (UID: \"ba84d81f-ea11-4c51-81a1-2edfd90b9144\") " pod="openstack/neutron-db-create-6dw5c" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.168959 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6dw5c" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.169601 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-config-data\") pod \"keystone-db-sync-mj4wm\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.169848 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-combined-ca-bundle\") pod \"keystone-db-sync-mj4wm\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.170631 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab05c5db-4946-423d-8123-d76eaa3f716a-operator-scripts\") pod \"neutron-8fbf-account-create-25zrr\" (UID: \"ab05c5db-4946-423d-8123-d76eaa3f716a\") " pod="openstack/neutron-8fbf-account-create-25zrr" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.171723 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab05c5db-4946-423d-8123-d76eaa3f716a-operator-scripts\") pod \"neutron-8fbf-account-create-25zrr\" (UID: \"ab05c5db-4946-423d-8123-d76eaa3f716a\") " pod="openstack/neutron-8fbf-account-create-25zrr" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.173046 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m88gb\" (UniqueName: \"kubernetes.io/projected/134b8eee-26a9-42c6-adec-2ac29ee455ed-kube-api-access-m88gb\") pod \"keystone-db-sync-mj4wm\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.173929 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-config-data\") pod \"keystone-db-sync-mj4wm\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.174200 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-677m7\" (UniqueName: \"kubernetes.io/projected/ab05c5db-4946-423d-8123-d76eaa3f716a-kube-api-access-677m7\") pod \"neutron-8fbf-account-create-25zrr\" (UID: \"ab05c5db-4946-423d-8123-d76eaa3f716a\") " pod="openstack/neutron-8fbf-account-create-25zrr" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.182953 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-combined-ca-bundle\") pod \"keystone-db-sync-mj4wm\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.191631 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m88gb\" 
(UniqueName: \"kubernetes.io/projected/134b8eee-26a9-42c6-adec-2ac29ee455ed-kube-api-access-m88gb\") pod \"keystone-db-sync-mj4wm\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.192834 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-677m7\" (UniqueName: \"kubernetes.io/projected/ab05c5db-4946-423d-8123-d76eaa3f716a-kube-api-access-677m7\") pod \"neutron-8fbf-account-create-25zrr\" (UID: \"ab05c5db-4946-423d-8123-d76eaa3f716a\") " pod="openstack/neutron-8fbf-account-create-25zrr" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.194906 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2216-account-create-2hzgb" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.282379 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:02 crc kubenswrapper[4767]: I1124 21:55:02.322727 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8fbf-account-create-25zrr" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.585105 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ngft4-config-jgcjq" event={"ID":"31b4d983-69ec-478a-b977-cc6d4e4c13e6","Type":"ContainerDied","Data":"ff01fd27df6b474b3de48e6cf71ffd17f2897e42df836d65a04cadd163b55642"} Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.585707 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff01fd27df6b474b3de48e6cf71ffd17f2897e42df836d65a04cadd163b55642" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.591299 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702000 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run\") pod \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702061 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-scripts\") pod \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702108 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-additional-scripts\") pod \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702091 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run" (OuterVolumeSpecName: "var-run") pod "31b4d983-69ec-478a-b977-cc6d4e4c13e6" (UID: "31b4d983-69ec-478a-b977-cc6d4e4c13e6"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702147 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-log-ovn\") pod \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702227 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run-ovn\") pod \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702292 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp992\" (UniqueName: \"kubernetes.io/projected/31b4d983-69ec-478a-b977-cc6d4e4c13e6-kube-api-access-dp992\") pod \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\" (UID: \"31b4d983-69ec-478a-b977-cc6d4e4c13e6\") " Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702310 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "31b4d983-69ec-478a-b977-cc6d4e4c13e6" (UID: "31b4d983-69ec-478a-b977-cc6d4e4c13e6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702400 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "31b4d983-69ec-478a-b977-cc6d4e4c13e6" (UID: "31b4d983-69ec-478a-b977-cc6d4e4c13e6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702702 4767 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702717 4767 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702726 4767 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31b4d983-69ec-478a-b977-cc6d4e4c13e6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.702970 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "31b4d983-69ec-478a-b977-cc6d4e4c13e6" (UID: "31b4d983-69ec-478a-b977-cc6d4e4c13e6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.703250 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-scripts" (OuterVolumeSpecName: "scripts") pod "31b4d983-69ec-478a-b977-cc6d4e4c13e6" (UID: "31b4d983-69ec-478a-b977-cc6d4e4c13e6"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.706489 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b4d983-69ec-478a-b977-cc6d4e4c13e6-kube-api-access-dp992" (OuterVolumeSpecName: "kube-api-access-dp992") pod "31b4d983-69ec-478a-b977-cc6d4e4c13e6" (UID: "31b4d983-69ec-478a-b977-cc6d4e4c13e6"). InnerVolumeSpecName "kube-api-access-dp992". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.804696 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.805017 4767 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31b4d983-69ec-478a-b977-cc6d4e4c13e6-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.805029 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp992\" (UniqueName: \"kubernetes.io/projected/31b4d983-69ec-478a-b977-cc6d4e4c13e6-kube-api-access-dp992\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:03 crc kubenswrapper[4767]: I1124 21:55:03.930595 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.109727 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-web-config\") pod \"9fa46701-7516-4376-a72b-10c3eca271f8\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.109778 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb99v\" (UniqueName: \"kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-kube-api-access-hb99v\") pod \"9fa46701-7516-4376-a72b-10c3eca271f8\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.109825 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9fa46701-7516-4376-a72b-10c3eca271f8-config-out\") pod \"9fa46701-7516-4376-a72b-10c3eca271f8\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.109859 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-config\") pod \"9fa46701-7516-4376-a72b-10c3eca271f8\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.109982 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-thanos-prometheus-http-client-file\") pod \"9fa46701-7516-4376-a72b-10c3eca271f8\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.110937 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"9fa46701-7516-4376-a72b-10c3eca271f8\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.111021 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-tls-assets\") pod \"9fa46701-7516-4376-a72b-10c3eca271f8\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.111069 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9fa46701-7516-4376-a72b-10c3eca271f8-prometheus-metric-storage-rulefiles-0\") pod \"9fa46701-7516-4376-a72b-10c3eca271f8\" (UID: \"9fa46701-7516-4376-a72b-10c3eca271f8\") " Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.112448 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa46701-7516-4376-a72b-10c3eca271f8-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9fa46701-7516-4376-a72b-10c3eca271f8" (UID: "9fa46701-7516-4376-a72b-10c3eca271f8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.112792 4767 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9fa46701-7516-4376-a72b-10c3eca271f8-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.116657 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fa46701-7516-4376-a72b-10c3eca271f8-config-out" (OuterVolumeSpecName: "config-out") pod "9fa46701-7516-4376-a72b-10c3eca271f8" (UID: "9fa46701-7516-4376-a72b-10c3eca271f8"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.117729 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-kube-api-access-hb99v" (OuterVolumeSpecName: "kube-api-access-hb99v") pod "9fa46701-7516-4376-a72b-10c3eca271f8" (UID: "9fa46701-7516-4376-a72b-10c3eca271f8"). InnerVolumeSpecName "kube-api-access-hb99v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.150537 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9fa46701-7516-4376-a72b-10c3eca271f8" (UID: "9fa46701-7516-4376-a72b-10c3eca271f8"). InnerVolumeSpecName "pvc-71f905aa-f502-4da2-b361-dd72fb27e489". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.152852 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9fa46701-7516-4376-a72b-10c3eca271f8" (UID: "9fa46701-7516-4376-a72b-10c3eca271f8"). 
InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.152913 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-config" (OuterVolumeSpecName: "config") pod "9fa46701-7516-4376-a72b-10c3eca271f8" (UID: "9fa46701-7516-4376-a72b-10c3eca271f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.154869 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9fa46701-7516-4376-a72b-10c3eca271f8" (UID: "9fa46701-7516-4376-a72b-10c3eca271f8"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.175566 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-web-config" (OuterVolumeSpecName: "web-config") pod "9fa46701-7516-4376-a72b-10c3eca271f8" (UID: "9fa46701-7516-4376-a72b-10c3eca271f8"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.214753 4767 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.214806 4767 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") on node \"crc\" " Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.214839 4767 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.214849 4767 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-web-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.214859 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb99v\" (UniqueName: \"kubernetes.io/projected/9fa46701-7516-4376-a72b-10c3eca271f8-kube-api-access-hb99v\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.214872 4767 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9fa46701-7516-4376-a72b-10c3eca271f8-config-out\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.214881 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9fa46701-7516-4376-a72b-10c3eca271f8-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.242402 4767 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.242562 4767 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-71f905aa-f502-4da2-b361-dd72fb27e489" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489") on node "crc" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.294529 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2216-account-create-2hzgb"] Nov 24 21:55:04 crc kubenswrapper[4767]: W1124 21:55:04.301759 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba84d81f_ea11_4c51_81a1_2edfd90b9144.slice/crio-0c497335b1190d5c2521704c110b8e24305ad7fcd58121efdd2b30d28433990e WatchSource:0}: Error finding container 0c497335b1190d5c2521704c110b8e24305ad7fcd58121efdd2b30d28433990e: Status 404 returned error can't find the container with id 0c497335b1190d5c2521704c110b8e24305ad7fcd58121efdd2b30d28433990e Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.302129 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6dw5c"] Nov 24 21:55:04 crc kubenswrapper[4767]: W1124 21:55:04.304610 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod134b8eee_26a9_42c6_adec_2ac29ee455ed.slice/crio-2997a8767797a9b30ca21d4764dde6f227c1425e6d0a7ab235593845c5b841a3 WatchSource:0}: Error finding container 2997a8767797a9b30ca21d4764dde6f227c1425e6d0a7ab235593845c5b841a3: Status 404 returned error can't find the container with id 2997a8767797a9b30ca21d4764dde6f227c1425e6d0a7ab235593845c5b841a3 Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.321301 4767 reconciler_common.go:293] "Volume detached for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.349737 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mj4wm"] Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.471262 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q7bzm"] Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.503972 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-7wzxw"] Nov 24 21:55:04 crc kubenswrapper[4767]: W1124 21:55:04.507889 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd803aeed_f0af_4587_b58d_1e7e8273a21d.slice/crio-8d4af741b01f228751cb73ee68932548c4c0e144396e8830daf5bae0614d1811 WatchSource:0}: Error finding container 8d4af741b01f228751cb73ee68932548c4c0e144396e8830daf5bae0614d1811: Status 404 returned error can't find the container with id 8d4af741b01f228751cb73ee68932548c4c0e144396e8830daf5bae0614d1811 Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.568494 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 21:55:04 crc kubenswrapper[4767]: W1124 21:55:04.586858 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb319bac_943e_4baa_afb0_2089513c8935.slice/crio-df58fdfd4a65c6dc0e99046087ac6aebd559345b0fd9f07346a9e5d3db6b4a8c WatchSource:0}: Error finding container 
df58fdfd4a65c6dc0e99046087ac6aebd559345b0fd9f07346a9e5d3db6b4a8c: Status 404 returned error can't find the container with id df58fdfd4a65c6dc0e99046087ac6aebd559345b0fd9f07346a9e5d3db6b4a8c Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.602353 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l8gk2" event={"ID":"5783bdd7-a5b2-4ba7-9aa5-505f01383747","Type":"ContainerStarted","Data":"b9c06f9935a37f32def59b3dc1b5eecbc75dc1a47de5f0aeb0da629b1b23a0bf"} Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.606071 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mj4wm" event={"ID":"134b8eee-26a9-42c6-adec-2ac29ee455ed","Type":"ContainerStarted","Data":"2997a8767797a9b30ca21d4764dde6f227c1425e6d0a7ab235593845c5b841a3"} Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.609060 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-7wzxw" event={"ID":"d803aeed-f0af-4587-b58d-1e7e8273a21d","Type":"ContainerStarted","Data":"8d4af741b01f228751cb73ee68932548c4c0e144396e8830daf5bae0614d1811"} Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.624935 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hjlg6"] Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.632068 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6dw5c" event={"ID":"ba84d81f-ea11-4c51-81a1-2edfd90b9144","Type":"ContainerStarted","Data":"d33a7c4841c736ab51634b28e03dfcbf0ebfd75f39c7c627851d89b8ad7ea51f"} Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.632127 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6dw5c" event={"ID":"ba84d81f-ea11-4c51-81a1-2edfd90b9144","Type":"ContainerStarted","Data":"0c497335b1190d5c2521704c110b8e24305ad7fcd58121efdd2b30d28433990e"} Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.642919 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8fbf-account-create-25zrr"] Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.648156 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2216-account-create-2hzgb" event={"ID":"5a72f88f-06d7-4a5f-b391-976efcc9ea67","Type":"ContainerStarted","Data":"8d2ec0fe14f7fea0a3cc95b384f4f5f3851e067b1383bdd149326708f1b1038e"} Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.648215 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2216-account-create-2hzgb" event={"ID":"5a72f88f-06d7-4a5f-b391-976efcc9ea67","Type":"ContainerStarted","Data":"013965f69455715e36e67fe42cb55abfaad696a902810ac97acabfef563994b3"} Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.659717 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-l8gk2" podStartSLOduration=2.745746286 podStartE2EDuration="16.659698981s" podCreationTimestamp="2025-11-24 21:54:48 +0000 UTC" firstStartedPulling="2025-11-24 21:54:49.722853503 +0000 UTC m=+972.639836875" lastFinishedPulling="2025-11-24 21:55:03.636806198 +0000 UTC m=+986.553789570" observedRunningTime="2025-11-24 21:55:04.623916565 +0000 UTC m=+987.540899967" watchObservedRunningTime="2025-11-24 21:55:04.659698981 +0000 UTC m=+987.576682343" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.684708 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"9fa46701-7516-4376-a72b-10c3eca271f8","Type":"ContainerDied","Data":"4c989c1fd50ece380f889af23b497792f31c5e8e5470776034acbf2c2bcb9e28"} Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.684831 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.684881 4767 scope.go:117] "RemoveContainer" containerID="965e7d16695881568ebc9dead0feb3dc11c3eb3fee826c06b32c892366474314" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.688148 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ngft4-config-jgcjq" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.688235 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q7bzm" event={"ID":"3983d70b-b45a-4ee3-a9ef-988fa258635b","Type":"ContainerStarted","Data":"8287032d28b0f1c53325db8f3a8b75f6ae926f8bd1d6eeed7e4618417b051550"} Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.713619 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0f65-account-create-h2qbg"] Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.715775 4767 scope.go:117] "RemoveContainer" containerID="cb5a51067de7206023eeef4a91560518ba01577f3f704c9c7e5917433288d26c" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.719341 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-6dw5c" podStartSLOduration=3.719326814 podStartE2EDuration="3.719326814s" podCreationTimestamp="2025-11-24 21:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:55:04.654483773 +0000 UTC m=+987.571467145" watchObservedRunningTime="2025-11-24 21:55:04.719326814 +0000 UTC m=+987.636310186" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.730729 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2216-account-create-2hzgb" podStartSLOduration=3.730705177 podStartE2EDuration="3.730705177s" podCreationTimestamp="2025-11-24 21:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:55:04.668172141 +0000 UTC m=+987.585155513" watchObservedRunningTime="2025-11-24 21:55:04.730705177 +0000 UTC m=+987.647688549" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.739789 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ngft4-config-jgcjq"] Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.758840 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ngft4-config-jgcjq"] Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.770090 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.777894 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.794367 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:55:04 crc kubenswrapper[4767]: E1124 21:55:04.794835 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="thanos-sidecar" Nov 24 21:55:04 crc 
kubenswrapper[4767]: I1124 21:55:04.794847 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="thanos-sidecar" Nov 24 21:55:04 crc kubenswrapper[4767]: E1124 21:55:04.794859 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="prometheus" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.794865 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="prometheus" Nov 24 21:55:04 crc kubenswrapper[4767]: E1124 21:55:04.794882 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b4d983-69ec-478a-b977-cc6d4e4c13e6" containerName="ovn-config" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.794888 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b4d983-69ec-478a-b977-cc6d4e4c13e6" containerName="ovn-config" Nov 24 21:55:04 crc kubenswrapper[4767]: E1124 21:55:04.794901 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="init-config-reloader" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.794908 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="init-config-reloader" Nov 24 21:55:04 crc kubenswrapper[4767]: E1124 21:55:04.794927 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="config-reloader" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.794943 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="config-reloader" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.795116 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="prometheus" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.795124 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b4d983-69ec-478a-b977-cc6d4e4c13e6" containerName="ovn-config" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.795139 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="config-reloader" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.795145 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" containerName="thanos-sidecar" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.796834 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.800972 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-mmxtp" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.801205 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.801333 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.801457 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.801554 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.801579 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.807917 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.824671 4767 scope.go:117] "RemoveContainer" containerID="941bf4b90f241ba71fc8a7839ff3dfbd09862e995b99fa8ddd7b52c1bf32771b" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.827544 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.941478 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.941568 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.941608 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.941628 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/825cb17a-68e9-412d-829f-88001f53782c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.941765 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-config\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.941908 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/825cb17a-68e9-412d-829f-88001f53782c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.941936 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.941965 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm8ts\" (UniqueName: \"kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-kube-api-access-dm8ts\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.942114 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.942214 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:04 crc kubenswrapper[4767]: I1124 21:55:04.942281 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.043572 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/825cb17a-68e9-412d-829f-88001f53782c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.043640 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.043670 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm8ts\" (UniqueName: \"kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-kube-api-access-dm8ts\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.043723 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.043773 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.043821 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.043855 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.043897 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.044243 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.044288 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/825cb17a-68e9-412d-829f-88001f53782c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " 
pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.044330 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-config\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.045383 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/825cb17a-68e9-412d-829f-88001f53782c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.047911 4767 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.047952 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5b4c963982fee8444440b339c0b04b674e3a0c1d34dde87d25887f0d341e5df1/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.050361 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/825cb17a-68e9-412d-829f-88001f53782c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.050621 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.051243 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.052944 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.052949 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.054005 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.054564 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-config\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.079780 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm8ts\" (UniqueName: \"kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-kube-api-access-dm8ts\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.085955 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.091082 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.212611 4767 scope.go:117] "RemoveContainer" containerID="14ef125f4c3d314c8a699b386e82bb5e988d1e9e0cdcdf681db1f9091fed3375" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.241133 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.719485 4767 generic.go:334] "Generic (PLEG): container finished" podID="3983d70b-b45a-4ee3-a9ef-988fa258635b" containerID="197e94a1f4a5772c03d3bbaa91156fc7a8eb52691a7c6cb5d23e26f534591f9c" exitCode=0 Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.719659 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q7bzm" event={"ID":"3983d70b-b45a-4ee3-a9ef-988fa258635b","Type":"ContainerDied","Data":"197e94a1f4a5772c03d3bbaa91156fc7a8eb52691a7c6cb5d23e26f534591f9c"} Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.726993 4767 generic.go:334] "Generic (PLEG): container finished" podID="96621856-cbd1-4e79-a210-59cb502ba291" containerID="ab586574f3248cde5b18ec034686b4f4f72bf6ee64a175292a85f0de931b3a7b" exitCode=0 Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.727065 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hjlg6" event={"ID":"96621856-cbd1-4e79-a210-59cb502ba291","Type":"ContainerDied","Data":"ab586574f3248cde5b18ec034686b4f4f72bf6ee64a175292a85f0de931b3a7b"} Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.727094 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hjlg6" event={"ID":"96621856-cbd1-4e79-a210-59cb502ba291","Type":"ContainerStarted","Data":"54ff10902607c1676b249187c5201a338aeb7e007efff5bb45823fab9a8da045"} Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.732388 4767 generic.go:334] "Generic (PLEG): container finished" podID="ba84d81f-ea11-4c51-81a1-2edfd90b9144" containerID="d33a7c4841c736ab51634b28e03dfcbf0ebfd75f39c7c627851d89b8ad7ea51f" exitCode=0 Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.732625 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6dw5c" event={"ID":"ba84d81f-ea11-4c51-81a1-2edfd90b9144","Type":"ContainerDied","Data":"d33a7c4841c736ab51634b28e03dfcbf0ebfd75f39c7c627851d89b8ad7ea51f"} Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.739491 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f65-account-create-h2qbg" event={"ID":"0f386a17-08d4-4c2d-8727-5171cb4275a5","Type":"ContainerStarted","Data":"acc336df61ba28e5aaea71da2df7976f80c2cfa1176bed7636a5a824455ad4af"} Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.739554 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f65-account-create-h2qbg" event={"ID":"0f386a17-08d4-4c2d-8727-5171cb4275a5","Type":"ContainerStarted","Data":"e6613709e9f6f80ac9c6170941928469faaa0fd579c551692711d1f7065ae617"} Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.742755 4767 generic.go:334] "Generic (PLEG): container finished" podID="5a72f88f-06d7-4a5f-b391-976efcc9ea67" containerID="8d2ec0fe14f7fea0a3cc95b384f4f5f3851e067b1383bdd149326708f1b1038e" exitCode=0 Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.742814 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2216-account-create-2hzgb" event={"ID":"5a72f88f-06d7-4a5f-b391-976efcc9ea67","Type":"ContainerDied","Data":"8d2ec0fe14f7fea0a3cc95b384f4f5f3851e067b1383bdd149326708f1b1038e"} Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.770507 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8fbf-account-create-25zrr" 
event={"ID":"ab05c5db-4946-423d-8123-d76eaa3f716a","Type":"ContainerDied","Data":"33f9aa175322eae694e4347f835d3a61ce610abcf2358b8f2b380f614d1b7f79"} Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.770480 4767 generic.go:334] "Generic (PLEG): container finished" podID="ab05c5db-4946-423d-8123-d76eaa3f716a" containerID="33f9aa175322eae694e4347f835d3a61ce610abcf2358b8f2b380f614d1b7f79" exitCode=0 Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.771066 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8fbf-account-create-25zrr" event={"ID":"ab05c5db-4946-423d-8123-d76eaa3f716a","Type":"ContainerStarted","Data":"490cfe0add39cdaf01794a0910aa952f1f4c128d90e9e71d47c01dcd444d6090"} Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.782539 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"df58fdfd4a65c6dc0e99046087ac6aebd559345b0fd9f07346a9e5d3db6b4a8c"} Nov 24 21:55:05 crc kubenswrapper[4767]: I1124 21:55:05.853817 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:55:05 crc kubenswrapper[4767]: W1124 21:55:05.858814 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod825cb17a_68e9_412d_829f_88001f53782c.slice/crio-7b3fd4452055af52d38f4fc7a0317dc004c3938640399d24e65b842f104b2336 WatchSource:0}: Error finding container 7b3fd4452055af52d38f4fc7a0317dc004c3938640399d24e65b842f104b2336: Status 404 returned error can't find the container with id 7b3fd4452055af52d38f4fc7a0317dc004c3938640399d24e65b842f104b2336 Nov 24 21:55:06 crc kubenswrapper[4767]: I1124 21:55:06.328371 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31b4d983-69ec-478a-b977-cc6d4e4c13e6" path="/var/lib/kubelet/pods/31b4d983-69ec-478a-b977-cc6d4e4c13e6/volumes" Nov 24 21:55:06 crc kubenswrapper[4767]: I1124 21:55:06.329438 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa46701-7516-4376-a72b-10c3eca271f8" path="/var/lib/kubelet/pods/9fa46701-7516-4376-a72b-10c3eca271f8/volumes" Nov 24 21:55:06 crc kubenswrapper[4767]: I1124 21:55:06.821909 4767 generic.go:334] "Generic (PLEG): container finished" podID="0f386a17-08d4-4c2d-8727-5171cb4275a5" containerID="acc336df61ba28e5aaea71da2df7976f80c2cfa1176bed7636a5a824455ad4af" exitCode=0 Nov 24 21:55:06 crc kubenswrapper[4767]: I1124 21:55:06.822021 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f65-account-create-h2qbg" event={"ID":"0f386a17-08d4-4c2d-8727-5171cb4275a5","Type":"ContainerDied","Data":"acc336df61ba28e5aaea71da2df7976f80c2cfa1176bed7636a5a824455ad4af"} Nov 24 21:55:06 crc kubenswrapper[4767]: I1124 21:55:06.826590 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"825cb17a-68e9-412d-829f-88001f53782c","Type":"ContainerStarted","Data":"7b3fd4452055af52d38f4fc7a0317dc004c3938640399d24e65b842f104b2336"} Nov 24 21:55:08 crc kubenswrapper[4767]: I1124 21:55:08.847780 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"825cb17a-68e9-412d-829f-88001f53782c","Type":"ContainerStarted","Data":"387ec5927bfb3e773b99fea7bd24a3cffb7e069f3f48032c3a149150d7a6bdc1"} Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.399587 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2216-account-create-2hzgb" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.407353 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0f65-account-create-h2qbg" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.555175 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cht6x\" (UniqueName: \"kubernetes.io/projected/0f386a17-08d4-4c2d-8727-5171cb4275a5-kube-api-access-cht6x\") pod \"0f386a17-08d4-4c2d-8727-5171cb4275a5\" (UID: \"0f386a17-08d4-4c2d-8727-5171cb4275a5\") " Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.555226 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a72f88f-06d7-4a5f-b391-976efcc9ea67-operator-scripts\") pod \"5a72f88f-06d7-4a5f-b391-976efcc9ea67\" (UID: \"5a72f88f-06d7-4a5f-b391-976efcc9ea67\") " Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.555440 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f386a17-08d4-4c2d-8727-5171cb4275a5-operator-scripts\") pod \"0f386a17-08d4-4c2d-8727-5171cb4275a5\" (UID: \"0f386a17-08d4-4c2d-8727-5171cb4275a5\") " Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.555513 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzrpv\" (UniqueName: \"kubernetes.io/projected/5a72f88f-06d7-4a5f-b391-976efcc9ea67-kube-api-access-lzrpv\") pod \"5a72f88f-06d7-4a5f-b391-976efcc9ea67\" (UID: \"5a72f88f-06d7-4a5f-b391-976efcc9ea67\") " Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.555730 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a72f88f-06d7-4a5f-b391-976efcc9ea67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a72f88f-06d7-4a5f-b391-976efcc9ea67" (UID: "5a72f88f-06d7-4a5f-b391-976efcc9ea67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.555894 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a72f88f-06d7-4a5f-b391-976efcc9ea67-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.555988 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f386a17-08d4-4c2d-8727-5171cb4275a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f386a17-08d4-4c2d-8727-5171cb4275a5" (UID: "0f386a17-08d4-4c2d-8727-5171cb4275a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.561248 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a72f88f-06d7-4a5f-b391-976efcc9ea67-kube-api-access-lzrpv" (OuterVolumeSpecName: "kube-api-access-lzrpv") pod "5a72f88f-06d7-4a5f-b391-976efcc9ea67" (UID: "5a72f88f-06d7-4a5f-b391-976efcc9ea67"). InnerVolumeSpecName "kube-api-access-lzrpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.561367 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f386a17-08d4-4c2d-8727-5171cb4275a5-kube-api-access-cht6x" (OuterVolumeSpecName: "kube-api-access-cht6x") pod "0f386a17-08d4-4c2d-8727-5171cb4275a5" (UID: "0f386a17-08d4-4c2d-8727-5171cb4275a5"). InnerVolumeSpecName "kube-api-access-cht6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.657201 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cht6x\" (UniqueName: \"kubernetes.io/projected/0f386a17-08d4-4c2d-8727-5171cb4275a5-kube-api-access-cht6x\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.657244 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f386a17-08d4-4c2d-8727-5171cb4275a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.657258 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzrpv\" (UniqueName: \"kubernetes.io/projected/5a72f88f-06d7-4a5f-b391-976efcc9ea67-kube-api-access-lzrpv\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.857706 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f65-account-create-h2qbg" event={"ID":"0f386a17-08d4-4c2d-8727-5171cb4275a5","Type":"ContainerDied","Data":"e6613709e9f6f80ac9c6170941928469faaa0fd579c551692711d1f7065ae617"} Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.857754 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6613709e9f6f80ac9c6170941928469faaa0fd579c551692711d1f7065ae617" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.857806 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0f65-account-create-h2qbg" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.860256 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2216-account-create-2hzgb" event={"ID":"5a72f88f-06d7-4a5f-b391-976efcc9ea67","Type":"ContainerDied","Data":"013965f69455715e36e67fe42cb55abfaad696a902810ac97acabfef563994b3"} Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.860319 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="013965f69455715e36e67fe42cb55abfaad696a902810ac97acabfef563994b3" Nov 24 21:55:09 crc kubenswrapper[4767]: I1124 21:55:09.860332 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2216-account-create-2hzgb" Nov 24 21:55:11 crc kubenswrapper[4767]: I1124 21:55:11.890433 4767 generic.go:334] "Generic (PLEG): container finished" podID="5783bdd7-a5b2-4ba7-9aa5-505f01383747" containerID="b9c06f9935a37f32def59b3dc1b5eecbc75dc1a47de5f0aeb0da629b1b23a0bf" exitCode=0 Nov 24 21:55:11 crc kubenswrapper[4767]: I1124 21:55:11.890534 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l8gk2" event={"ID":"5783bdd7-a5b2-4ba7-9aa5-505f01383747","Type":"ContainerDied","Data":"b9c06f9935a37f32def59b3dc1b5eecbc75dc1a47de5f0aeb0da629b1b23a0bf"} Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.421574 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l8gk2" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.431582 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-db-sync-config-data\") pod \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.431683 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2pn6\" (UniqueName: \"kubernetes.io/projected/5783bdd7-a5b2-4ba7-9aa5-505f01383747-kube-api-access-x2pn6\") pod \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.431708 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-combined-ca-bundle\") pod \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.431783 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-config-data\") pod \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\" (UID: \"5783bdd7-a5b2-4ba7-9aa5-505f01383747\") " Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.439885 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5783bdd7-a5b2-4ba7-9aa5-505f01383747" (UID: "5783bdd7-a5b2-4ba7-9aa5-505f01383747"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.440875 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5783bdd7-a5b2-4ba7-9aa5-505f01383747-kube-api-access-x2pn6" (OuterVolumeSpecName: "kube-api-access-x2pn6") pod "5783bdd7-a5b2-4ba7-9aa5-505f01383747" (UID: "5783bdd7-a5b2-4ba7-9aa5-505f01383747"). InnerVolumeSpecName "kube-api-access-x2pn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.485368 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5783bdd7-a5b2-4ba7-9aa5-505f01383747" (UID: "5783bdd7-a5b2-4ba7-9aa5-505f01383747"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.490019 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-config-data" (OuterVolumeSpecName: "config-data") pod "5783bdd7-a5b2-4ba7-9aa5-505f01383747" (UID: "5783bdd7-a5b2-4ba7-9aa5-505f01383747"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.533100 4767 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.533133 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2pn6\" (UniqueName: \"kubernetes.io/projected/5783bdd7-a5b2-4ba7-9aa5-505f01383747-kube-api-access-x2pn6\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.533145 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.533153 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783bdd7-a5b2-4ba7-9aa5-505f01383747-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.931973 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8fbf-account-create-25zrr" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.938968 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q7bzm" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.939175 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-677m7\" (UniqueName: \"kubernetes.io/projected/ab05c5db-4946-423d-8123-d76eaa3f716a-kube-api-access-677m7\") pod \"ab05c5db-4946-423d-8123-d76eaa3f716a\" (UID: \"ab05c5db-4946-423d-8123-d76eaa3f716a\") " Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.939219 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab05c5db-4946-423d-8123-d76eaa3f716a-operator-scripts\") pod \"ab05c5db-4946-423d-8123-d76eaa3f716a\" (UID: \"ab05c5db-4946-423d-8123-d76eaa3f716a\") " Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.940551 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab05c5db-4946-423d-8123-d76eaa3f716a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab05c5db-4946-423d-8123-d76eaa3f716a" (UID: "ab05c5db-4946-423d-8123-d76eaa3f716a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.947045 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-q7bzm" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.947473 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q7bzm" event={"ID":"3983d70b-b45a-4ee3-a9ef-988fa258635b","Type":"ContainerDied","Data":"8287032d28b0f1c53325db8f3a8b75f6ae926f8bd1d6eeed7e4618417b051550"} Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.947503 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8287032d28b0f1c53325db8f3a8b75f6ae926f8bd1d6eeed7e4618417b051550" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.948203 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab05c5db-4946-423d-8123-d76eaa3f716a-kube-api-access-677m7" (OuterVolumeSpecName: "kube-api-access-677m7") pod "ab05c5db-4946-423d-8123-d76eaa3f716a" (UID: "ab05c5db-4946-423d-8123-d76eaa3f716a"). InnerVolumeSpecName "kube-api-access-677m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.952455 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-l8gk2" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.952638 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l8gk2" event={"ID":"5783bdd7-a5b2-4ba7-9aa5-505f01383747","Type":"ContainerDied","Data":"cd483243f880eebdf87bc180b9bda8bad77dffff1b28de0aa4bf4a41aca9a1df"} Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.952670 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd483243f880eebdf87bc180b9bda8bad77dffff1b28de0aa4bf4a41aca9a1df" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.967042 4767 generic.go:334] "Generic (PLEG): container finished" podID="825cb17a-68e9-412d-829f-88001f53782c" containerID="387ec5927bfb3e773b99fea7bd24a3cffb7e069f3f48032c3a149150d7a6bdc1" exitCode=0 Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.967139 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"825cb17a-68e9-412d-829f-88001f53782c","Type":"ContainerDied","Data":"387ec5927bfb3e773b99fea7bd24a3cffb7e069f3f48032c3a149150d7a6bdc1"} Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.975639 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8fbf-account-create-25zrr" event={"ID":"ab05c5db-4946-423d-8123-d76eaa3f716a","Type":"ContainerDied","Data":"490cfe0add39cdaf01794a0910aa952f1f4c128d90e9e71d47c01dcd444d6090"} Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.975685 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="490cfe0add39cdaf01794a0910aa952f1f4c128d90e9e71d47c01dcd444d6090" Nov 24 21:55:16 crc kubenswrapper[4767]: I1124 21:55:16.975768 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8fbf-account-create-25zrr" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.040767 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-677m7\" (UniqueName: \"kubernetes.io/projected/ab05c5db-4946-423d-8123-d76eaa3f716a-kube-api-access-677m7\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.040798 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab05c5db-4946-423d-8123-d76eaa3f716a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.142253 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3983d70b-b45a-4ee3-a9ef-988fa258635b-operator-scripts\") pod \"3983d70b-b45a-4ee3-a9ef-988fa258635b\" (UID: \"3983d70b-b45a-4ee3-a9ef-988fa258635b\") " Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.142428 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcgx7\" (UniqueName: \"kubernetes.io/projected/3983d70b-b45a-4ee3-a9ef-988fa258635b-kube-api-access-tcgx7\") pod \"3983d70b-b45a-4ee3-a9ef-988fa258635b\" (UID: \"3983d70b-b45a-4ee3-a9ef-988fa258635b\") " Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.143136 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3983d70b-b45a-4ee3-a9ef-988fa258635b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3983d70b-b45a-4ee3-a9ef-988fa258635b" (UID: "3983d70b-b45a-4ee3-a9ef-988fa258635b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.143529 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3983d70b-b45a-4ee3-a9ef-988fa258635b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.145642 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3983d70b-b45a-4ee3-a9ef-988fa258635b-kube-api-access-tcgx7" (OuterVolumeSpecName: "kube-api-access-tcgx7") pod "3983d70b-b45a-4ee3-a9ef-988fa258635b" (UID: "3983d70b-b45a-4ee3-a9ef-988fa258635b"). InnerVolumeSpecName "kube-api-access-tcgx7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.245961 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcgx7\" (UniqueName: \"kubernetes.io/projected/3983d70b-b45a-4ee3-a9ef-988fa258635b-kube-api-access-tcgx7\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.808341 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-666p2"] Nov 24 21:55:17 crc kubenswrapper[4767]: E1124 21:55:17.809070 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3983d70b-b45a-4ee3-a9ef-988fa258635b" containerName="mariadb-database-create" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.809089 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="3983d70b-b45a-4ee3-a9ef-988fa258635b" containerName="mariadb-database-create" Nov 24 21:55:17 crc kubenswrapper[4767]: E1124 21:55:17.809118 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a72f88f-06d7-4a5f-b391-976efcc9ea67" containerName="mariadb-account-create" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.809125 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a72f88f-06d7-4a5f-b391-976efcc9ea67" containerName="mariadb-account-create" Nov 24 21:55:17 crc kubenswrapper[4767]: E1124 21:55:17.809140 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783bdd7-a5b2-4ba7-9aa5-505f01383747" containerName="glance-db-sync" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.809148 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783bdd7-a5b2-4ba7-9aa5-505f01383747" containerName="glance-db-sync" Nov 24 21:55:17 crc kubenswrapper[4767]: E1124 21:55:17.809165 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab05c5db-4946-423d-8123-d76eaa3f716a" containerName="mariadb-account-create" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.809174 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab05c5db-4946-423d-8123-d76eaa3f716a" containerName="mariadb-account-create" Nov 24 21:55:17 crc kubenswrapper[4767]: E1124 21:55:17.809185 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f386a17-08d4-4c2d-8727-5171cb4275a5" containerName="mariadb-account-create" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.809192 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f386a17-08d4-4c2d-8727-5171cb4275a5" containerName="mariadb-account-create" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.809392 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a72f88f-06d7-4a5f-b391-976efcc9ea67" containerName="mariadb-account-create" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.809412 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f386a17-08d4-4c2d-8727-5171cb4275a5" containerName="mariadb-account-create" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.809423 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab05c5db-4946-423d-8123-d76eaa3f716a" containerName="mariadb-account-create" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.809437 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5783bdd7-a5b2-4ba7-9aa5-505f01383747" containerName="glance-db-sync" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.809449 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="3983d70b-b45a-4ee3-a9ef-988fa258635b" 
containerName="mariadb-database-create" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.810506 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.823620 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-666p2"] Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.862901 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-config\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.862957 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.862978 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.863068 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-528sc\" (UniqueName: \"kubernetes.io/projected/71ccb354-55d1-4901-a20c-93aaa81bc64f-kube-api-access-528sc\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.863096 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.965877 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.966047 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-config\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.966065 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 
24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.966083 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.966146 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-528sc\" (UniqueName: \"kubernetes.io/projected/71ccb354-55d1-4901-a20c-93aaa81bc64f-kube-api-access-528sc\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.966793 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.966899 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.967038 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.967461 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-config\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:17 crc kubenswrapper[4767]: I1124 21:55:17.985849 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-528sc\" (UniqueName: \"kubernetes.io/projected/71ccb354-55d1-4901-a20c-93aaa81bc64f-kube-api-access-528sc\") pod \"dnsmasq-dns-5b946c75cc-666p2\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") " pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:18 crc kubenswrapper[4767]: I1124 21:55:18.135358 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:20 crc kubenswrapper[4767]: E1124 21:55:20.059554 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Nov 24 21:55:20 crc kubenswrapper[4767]: E1124 21:55:20.060200 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m88gb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-mj4wm_openstack(134b8eee-26a9-42c6-adec-2ac29ee455ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:55:20 crc kubenswrapper[4767]: E1124 21:55:20.061415 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-mj4wm" podUID="134b8eee-26a9-42c6-adec-2ac29ee455ed" Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.331481 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hjlg6" Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.359814 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6dw5c" Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.505954 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96621856-cbd1-4e79-a210-59cb502ba291-operator-scripts\") pod \"96621856-cbd1-4e79-a210-59cb502ba291\" (UID: \"96621856-cbd1-4e79-a210-59cb502ba291\") " Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.506017 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba84d81f-ea11-4c51-81a1-2edfd90b9144-operator-scripts\") pod \"ba84d81f-ea11-4c51-81a1-2edfd90b9144\" (UID: \"ba84d81f-ea11-4c51-81a1-2edfd90b9144\") " Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.506077 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmphp\" (UniqueName: \"kubernetes.io/projected/ba84d81f-ea11-4c51-81a1-2edfd90b9144-kube-api-access-rmphp\") pod \"ba84d81f-ea11-4c51-81a1-2edfd90b9144\" (UID: \"ba84d81f-ea11-4c51-81a1-2edfd90b9144\") " Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.506147 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmks6\" (UniqueName: \"kubernetes.io/projected/96621856-cbd1-4e79-a210-59cb502ba291-kube-api-access-vmks6\") pod \"96621856-cbd1-4e79-a210-59cb502ba291\" (UID: \"96621856-cbd1-4e79-a210-59cb502ba291\") " Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.506789 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba84d81f-ea11-4c51-81a1-2edfd90b9144-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba84d81f-ea11-4c51-81a1-2edfd90b9144" (UID: "ba84d81f-ea11-4c51-81a1-2edfd90b9144"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.506874 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96621856-cbd1-4e79-a210-59cb502ba291-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96621856-cbd1-4e79-a210-59cb502ba291" (UID: "96621856-cbd1-4e79-a210-59cb502ba291"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.510909 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba84d81f-ea11-4c51-81a1-2edfd90b9144-kube-api-access-rmphp" (OuterVolumeSpecName: "kube-api-access-rmphp") pod "ba84d81f-ea11-4c51-81a1-2edfd90b9144" (UID: "ba84d81f-ea11-4c51-81a1-2edfd90b9144"). InnerVolumeSpecName "kube-api-access-rmphp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.510963 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96621856-cbd1-4e79-a210-59cb502ba291-kube-api-access-vmks6" (OuterVolumeSpecName: "kube-api-access-vmks6") pod "96621856-cbd1-4e79-a210-59cb502ba291" (UID: "96621856-cbd1-4e79-a210-59cb502ba291"). InnerVolumeSpecName "kube-api-access-vmks6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.595299 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-666p2"] Nov 24 21:55:20 crc kubenswrapper[4767]: W1124 21:55:20.597487 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71ccb354_55d1_4901_a20c_93aaa81bc64f.slice/crio-d6949d9d0fb34a22efbeae23c7583afd8355161d0398bad7e4225eea5c941137 WatchSource:0}: Error finding container d6949d9d0fb34a22efbeae23c7583afd8355161d0398bad7e4225eea5c941137: Status 404 returned error can't find the container with id d6949d9d0fb34a22efbeae23c7583afd8355161d0398bad7e4225eea5c941137 Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.611358 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmphp\" (UniqueName: \"kubernetes.io/projected/ba84d81f-ea11-4c51-81a1-2edfd90b9144-kube-api-access-rmphp\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.611387 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmks6\" (UniqueName: \"kubernetes.io/projected/96621856-cbd1-4e79-a210-59cb502ba291-kube-api-access-vmks6\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.611396 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96621856-cbd1-4e79-a210-59cb502ba291-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:20 crc kubenswrapper[4767]: I1124 21:55:20.611405 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba84d81f-ea11-4c51-81a1-2edfd90b9144-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.057398 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"b54ad0ce44edb49b9a3c9f281d6da4815459c3a6d6a2cb42e72e9b20a94eebde"} Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.057448 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"18da53c2c3938000ae39ed0ddc4fa5a5718fe2671c83ca2d48f49f858ceae953"} Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.057464 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"f24c3c3733c3608ba4eb7d39c25aeb46f268f99c4a1cb4f4fd0e27daa4c0afca"} Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.058773 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hjlg6" event={"ID":"96621856-cbd1-4e79-a210-59cb502ba291","Type":"ContainerDied","Data":"54ff10902607c1676b249187c5201a338aeb7e007efff5bb45823fab9a8da045"} Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.058797 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ff10902607c1676b249187c5201a338aeb7e007efff5bb45823fab9a8da045" Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.058806 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hjlg6" Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.060784 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"825cb17a-68e9-412d-829f-88001f53782c","Type":"ContainerStarted","Data":"b93452ecdbfd84b4d4056576486ff2145ebda4de665946cda363b626a451c53a"} Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.062214 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-7wzxw" event={"ID":"d803aeed-f0af-4587-b58d-1e7e8273a21d","Type":"ContainerStarted","Data":"a1a803232349f2dc08bb28c94ad9c1d02cf71632fc8498e9d707290cd72cb2f2"} Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.064081 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6dw5c" event={"ID":"ba84d81f-ea11-4c51-81a1-2edfd90b9144","Type":"ContainerDied","Data":"0c497335b1190d5c2521704c110b8e24305ad7fcd58121efdd2b30d28433990e"} Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.064112 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c497335b1190d5c2521704c110b8e24305ad7fcd58121efdd2b30d28433990e" Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.064154 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6dw5c" Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.076690 4767 generic.go:334] "Generic (PLEG): container finished" podID="71ccb354-55d1-4901-a20c-93aaa81bc64f" containerID="69f241913d04273025b928fe1926b11ede4d021e9ea6c87efd4d4adca40ffee7" exitCode=0 Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.077686 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" event={"ID":"71ccb354-55d1-4901-a20c-93aaa81bc64f","Type":"ContainerDied","Data":"69f241913d04273025b928fe1926b11ede4d021e9ea6c87efd4d4adca40ffee7"} Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.077718 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" event={"ID":"71ccb354-55d1-4901-a20c-93aaa81bc64f","Type":"ContainerStarted","Data":"d6949d9d0fb34a22efbeae23c7583afd8355161d0398bad7e4225eea5c941137"} Nov 24 21:55:21 crc kubenswrapper[4767]: E1124 21:55:21.082762 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-mj4wm" podUID="134b8eee-26a9-42c6-adec-2ac29ee455ed" Nov 24 21:55:21 crc kubenswrapper[4767]: I1124 21:55:21.095931 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-7wzxw" podStartSLOduration=4.48703173 podStartE2EDuration="20.095916808s" podCreationTimestamp="2025-11-24 21:55:01 +0000 UTC" firstStartedPulling="2025-11-24 21:55:04.530262966 +0000 UTC m=+987.447246338" lastFinishedPulling="2025-11-24 21:55:20.139148044 +0000 UTC m=+1003.056131416" observedRunningTime="2025-11-24 21:55:21.092709467 +0000 UTC m=+1004.009692839" watchObservedRunningTime="2025-11-24 21:55:21.095916808 +0000 UTC m=+1004.012900180" Nov 24 21:55:22 crc kubenswrapper[4767]: I1124 21:55:22.086760 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" 
event={"ID":"71ccb354-55d1-4901-a20c-93aaa81bc64f","Type":"ContainerStarted","Data":"689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c"} Nov 24 21:55:22 crc kubenswrapper[4767]: I1124 21:55:22.087946 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:22 crc kubenswrapper[4767]: I1124 21:55:22.091870 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"9898b36fd4080b2eade3b86c2b7324ec282aba9c699ca6c0c2a4544627cbc4b8"} Nov 24 21:55:22 crc kubenswrapper[4767]: I1124 21:55:22.116557 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" podStartSLOduration=5.116535146 podStartE2EDuration="5.116535146s" podCreationTimestamp="2025-11-24 21:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:55:22.107626533 +0000 UTC m=+1005.024609935" watchObservedRunningTime="2025-11-24 21:55:22.116535146 +0000 UTC m=+1005.033518538" Nov 24 21:55:23 crc kubenswrapper[4767]: I1124 21:55:23.100411 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"825cb17a-68e9-412d-829f-88001f53782c","Type":"ContainerStarted","Data":"a219ea07fcd3fd0e5a8b0567916bd7ae58e89018793fff91b39baa82fff1e6b0"} Nov 24 21:55:24 crc kubenswrapper[4767]: I1124 21:55:24.122364 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"32a64b3eda4037a68101fcc7e3ac9274c15fe482e2058b7106433358be6e44be"} Nov 24 21:55:24 crc kubenswrapper[4767]: I1124 21:55:24.123066 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"087374949ecf1f7b4758c0fbc33e9058c188024ce8ff100096d752ee37a8a78d"} Nov 24 21:55:24 crc kubenswrapper[4767]: I1124 21:55:24.123084 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"006d3a2d06efe3a5366948f75339f62d7ec3dec0d94d4756fa05f5349f4f2d9e"} Nov 24 21:55:24 crc kubenswrapper[4767]: I1124 21:55:24.123098 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"6630c37eaa35ef1a484331ed2b80c2ab8a57eeb5f2b9b6965af0dc0c57896b35"} Nov 24 21:55:24 crc kubenswrapper[4767]: I1124 21:55:24.126661 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"825cb17a-68e9-412d-829f-88001f53782c","Type":"ContainerStarted","Data":"5350731ca94ea60bc9fa4513e771a8c6cf594106b7d5a5fc485d8d0244564dc6"} Nov 24 21:55:24 crc kubenswrapper[4767]: I1124 21:55:24.129066 4767 generic.go:334] "Generic (PLEG): container finished" podID="d803aeed-f0af-4587-b58d-1e7e8273a21d" containerID="a1a803232349f2dc08bb28c94ad9c1d02cf71632fc8498e9d707290cd72cb2f2" exitCode=0 Nov 24 21:55:24 crc kubenswrapper[4767]: I1124 21:55:24.129162 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-7wzxw" 
event={"ID":"d803aeed-f0af-4587-b58d-1e7e8273a21d","Type":"ContainerDied","Data":"a1a803232349f2dc08bb28c94ad9c1d02cf71632fc8498e9d707290cd72cb2f2"} Nov 24 21:55:24 crc kubenswrapper[4767]: I1124 21:55:24.168967 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.168944799 podStartE2EDuration="20.168944799s" podCreationTimestamp="2025-11-24 21:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:55:24.160949072 +0000 UTC m=+1007.077932474" watchObservedRunningTime="2025-11-24 21:55:24.168944799 +0000 UTC m=+1007.085928191" Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.241684 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.551343 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.705366 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjxc7\" (UniqueName: \"kubernetes.io/projected/d803aeed-f0af-4587-b58d-1e7e8273a21d-kube-api-access-fjxc7\") pod \"d803aeed-f0af-4587-b58d-1e7e8273a21d\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.705533 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-config-data\") pod \"d803aeed-f0af-4587-b58d-1e7e8273a21d\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.705598 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-combined-ca-bundle\") pod \"d803aeed-f0af-4587-b58d-1e7e8273a21d\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.705627 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-db-sync-config-data\") pod \"d803aeed-f0af-4587-b58d-1e7e8273a21d\" (UID: \"d803aeed-f0af-4587-b58d-1e7e8273a21d\") " Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.711231 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d803aeed-f0af-4587-b58d-1e7e8273a21d-kube-api-access-fjxc7" (OuterVolumeSpecName: "kube-api-access-fjxc7") pod "d803aeed-f0af-4587-b58d-1e7e8273a21d" (UID: "d803aeed-f0af-4587-b58d-1e7e8273a21d"). InnerVolumeSpecName "kube-api-access-fjxc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.711368 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d803aeed-f0af-4587-b58d-1e7e8273a21d" (UID: "d803aeed-f0af-4587-b58d-1e7e8273a21d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.733016 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d803aeed-f0af-4587-b58d-1e7e8273a21d" (UID: "d803aeed-f0af-4587-b58d-1e7e8273a21d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.771288 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-config-data" (OuterVolumeSpecName: "config-data") pod "d803aeed-f0af-4587-b58d-1e7e8273a21d" (UID: "d803aeed-f0af-4587-b58d-1e7e8273a21d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.807664 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjxc7\" (UniqueName: \"kubernetes.io/projected/d803aeed-f0af-4587-b58d-1e7e8273a21d-kube-api-access-fjxc7\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.807918 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.807931 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:25 crc kubenswrapper[4767]: I1124 21:55:25.807944 4767 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d803aeed-f0af-4587-b58d-1e7e8273a21d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:26 crc kubenswrapper[4767]: I1124 21:55:26.155972 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-7wzxw" event={"ID":"d803aeed-f0af-4587-b58d-1e7e8273a21d","Type":"ContainerDied","Data":"8d4af741b01f228751cb73ee68932548c4c0e144396e8830daf5bae0614d1811"} Nov 24 21:55:26 crc kubenswrapper[4767]: I1124 21:55:26.156006 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d4af741b01f228751cb73ee68932548c4c0e144396e8830daf5bae0614d1811" Nov 24 21:55:26 crc kubenswrapper[4767]: I1124 21:55:26.156062 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-7wzxw" Nov 24 21:55:26 crc kubenswrapper[4767]: I1124 21:55:26.174206 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"cad0086989350e9d56bd6788bb7ce40d19d5e5a521039e1b82941794bf581708"} Nov 24 21:55:26 crc kubenswrapper[4767]: I1124 21:55:26.174241 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"623848aeb0b6b0fbb351e9ffa83ddb6128bd8aefe28c1ac2ca3e8e23316f3808"} Nov 24 21:55:26 crc kubenswrapper[4767]: I1124 21:55:26.174251 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"f394f8bd95bc2a3d871635e031cbf8cd4d9178a188d8642b731179d5f2b83a18"} Nov 24 21:55:26 crc kubenswrapper[4767]: I1124 21:55:26.174259 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"d372b4c78db92ec8211769c5a8b9e76c83e40521667d1abea5795a92c2593125"} Nov 24 21:55:26 crc kubenswrapper[4767]: I1124 21:55:26.174304 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"e6ec7c16642bdb45a934ba1cccb59b8a5af4773d754b61b8a33e8a173f5b9120"} Nov 24 21:55:26 crc kubenswrapper[4767]: I1124 21:55:26.174317 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"a91c686f6698d4612549e7637cd1f792dfbb698679d8eb62a098b947cb5d2d80"} Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.190922 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db319bac-943e-4baa-afb0-2089513c8935","Type":"ContainerStarted","Data":"e4c38828c9e396e1ff65a3965a2f196f2dcb763f3362409bdbeefdc13c573560"} Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.232578 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.699942498 podStartE2EDuration="1m2.232553691s" podCreationTimestamp="2025-11-24 21:54:25 +0000 UTC" firstStartedPulling="2025-11-24 21:55:04.591408862 +0000 UTC m=+987.508392234" lastFinishedPulling="2025-11-24 21:55:25.124020035 +0000 UTC m=+1008.041003427" observedRunningTime="2025-11-24 21:55:27.227110846 +0000 UTC m=+1010.144094248" watchObservedRunningTime="2025-11-24 21:55:27.232553691 +0000 UTC m=+1010.149537073" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.503790 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-666p2"] Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.504382 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" podUID="71ccb354-55d1-4901-a20c-93aaa81bc64f" containerName="dnsmasq-dns" containerID="cri-o://689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c" gracePeriod=10 Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.505712 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 
21:55:27.538631 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-fc7bm"] Nov 24 21:55:27 crc kubenswrapper[4767]: E1124 21:55:27.538968 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96621856-cbd1-4e79-a210-59cb502ba291" containerName="mariadb-database-create" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.538985 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96621856-cbd1-4e79-a210-59cb502ba291" containerName="mariadb-database-create" Nov 24 21:55:27 crc kubenswrapper[4767]: E1124 21:55:27.539005 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d803aeed-f0af-4587-b58d-1e7e8273a21d" containerName="watcher-db-sync" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.539012 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d803aeed-f0af-4587-b58d-1e7e8273a21d" containerName="watcher-db-sync" Nov 24 21:55:27 crc kubenswrapper[4767]: E1124 21:55:27.539023 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba84d81f-ea11-4c51-81a1-2edfd90b9144" containerName="mariadb-database-create" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.539029 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba84d81f-ea11-4c51-81a1-2edfd90b9144" containerName="mariadb-database-create" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.539208 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba84d81f-ea11-4c51-81a1-2edfd90b9144" containerName="mariadb-database-create" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.539221 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96621856-cbd1-4e79-a210-59cb502ba291" containerName="mariadb-database-create" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.539234 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="d803aeed-f0af-4587-b58d-1e7e8273a21d" containerName="watcher-db-sync" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.540085 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.541887 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.559451 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-fc7bm"] Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.633965 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.634047 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-config\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.634081 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.634358 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.634435 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhqxx\" (UniqueName: \"kubernetes.io/projected/f97d9980-2ced-4225-b125-cfffc3f605c9-kube-api-access-rhqxx\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.634462 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.735698 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhqxx\" (UniqueName: \"kubernetes.io/projected/f97d9980-2ced-4225-b125-cfffc3f605c9-kube-api-access-rhqxx\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.735739 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: 
\"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.735786 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.735836 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-config\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.735861 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.735922 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.736618 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.736632 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.736963 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-config\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.737137 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.737247 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-fc7bm\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:27 crc kubenswrapper[4767]: 
Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.911989 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm"
Nov 24 21:55:27 crc kubenswrapper[4767]: I1124 21:55:27.927700 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-666p2"
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.039115 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-nb\") pod \"71ccb354-55d1-4901-a20c-93aaa81bc64f\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") "
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.039191 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-sb\") pod \"71ccb354-55d1-4901-a20c-93aaa81bc64f\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") "
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.039344 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-config\") pod \"71ccb354-55d1-4901-a20c-93aaa81bc64f\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") "
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.039418 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-dns-svc\") pod \"71ccb354-55d1-4901-a20c-93aaa81bc64f\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") "
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.039470 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-528sc\" (UniqueName: \"kubernetes.io/projected/71ccb354-55d1-4901-a20c-93aaa81bc64f-kube-api-access-528sc\") pod \"71ccb354-55d1-4901-a20c-93aaa81bc64f\" (UID: \"71ccb354-55d1-4901-a20c-93aaa81bc64f\") "
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.046736 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ccb354-55d1-4901-a20c-93aaa81bc64f-kube-api-access-528sc" (OuterVolumeSpecName: "kube-api-access-528sc") pod "71ccb354-55d1-4901-a20c-93aaa81bc64f" (UID: "71ccb354-55d1-4901-a20c-93aaa81bc64f"). InnerVolumeSpecName "kube-api-access-528sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.103317 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-config" (OuterVolumeSpecName: "config") pod "71ccb354-55d1-4901-a20c-93aaa81bc64f" (UID: "71ccb354-55d1-4901-a20c-93aaa81bc64f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.103800 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "71ccb354-55d1-4901-a20c-93aaa81bc64f" (UID: "71ccb354-55d1-4901-a20c-93aaa81bc64f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.105998 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "71ccb354-55d1-4901-a20c-93aaa81bc64f" (UID: "71ccb354-55d1-4901-a20c-93aaa81bc64f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.108112 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71ccb354-55d1-4901-a20c-93aaa81bc64f" (UID: "71ccb354-55d1-4901-a20c-93aaa81bc64f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.141220 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.141252 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.141283 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-config\") on node \"crc\" DevicePath \"\""
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.141292 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71ccb354-55d1-4901-a20c-93aaa81bc64f-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.141302 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-528sc\" (UniqueName: \"kubernetes.io/projected/71ccb354-55d1-4901-a20c-93aaa81bc64f-kube-api-access-528sc\") on node \"crc\" DevicePath \"\""
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.201676 4767 generic.go:334] "Generic (PLEG): container finished" podID="71ccb354-55d1-4901-a20c-93aaa81bc64f" containerID="689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c" exitCode=0
Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.201839 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-666p2"
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.201854 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" event={"ID":"71ccb354-55d1-4901-a20c-93aaa81bc64f","Type":"ContainerDied","Data":"689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c"} Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.201924 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-666p2" event={"ID":"71ccb354-55d1-4901-a20c-93aaa81bc64f","Type":"ContainerDied","Data":"d6949d9d0fb34a22efbeae23c7583afd8355161d0398bad7e4225eea5c941137"} Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.201949 4767 scope.go:117] "RemoveContainer" containerID="689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c" Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.233489 4767 scope.go:117] "RemoveContainer" containerID="69f241913d04273025b928fe1926b11ede4d021e9ea6c87efd4d4adca40ffee7" Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.243888 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-666p2"] Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.254218 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-666p2"] Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.266399 4767 scope.go:117] "RemoveContainer" containerID="689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c" Nov 24 21:55:28 crc kubenswrapper[4767]: E1124 21:55:28.266820 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c\": container with ID starting with 689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c not found: ID does not exist" containerID="689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c" Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.266872 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c"} err="failed to get container status \"689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c\": rpc error: code = NotFound desc = could not find container \"689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c\": container with ID starting with 689deae6a41e78fcdc12967b5a5b67abc2bb6c90915fa7d3523b5aa35b54f56c not found: ID does not exist" Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.266892 4767 scope.go:117] "RemoveContainer" containerID="69f241913d04273025b928fe1926b11ede4d021e9ea6c87efd4d4adca40ffee7" Nov 24 21:55:28 crc kubenswrapper[4767]: E1124 21:55:28.267349 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f241913d04273025b928fe1926b11ede4d021e9ea6c87efd4d4adca40ffee7\": container with ID starting with 69f241913d04273025b928fe1926b11ede4d021e9ea6c87efd4d4adca40ffee7 not found: ID does not exist" containerID="69f241913d04273025b928fe1926b11ede4d021e9ea6c87efd4d4adca40ffee7" Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.267371 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f241913d04273025b928fe1926b11ede4d021e9ea6c87efd4d4adca40ffee7"} err="failed to get container status 
\"69f241913d04273025b928fe1926b11ede4d021e9ea6c87efd4d4adca40ffee7\": rpc error: code = NotFound desc = could not find container \"69f241913d04273025b928fe1926b11ede4d021e9ea6c87efd4d4adca40ffee7\": container with ID starting with 69f241913d04273025b928fe1926b11ede4d021e9ea6c87efd4d4adca40ffee7 not found: ID does not exist" Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.334796 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ccb354-55d1-4901-a20c-93aaa81bc64f" path="/var/lib/kubelet/pods/71ccb354-55d1-4901-a20c-93aaa81bc64f/volumes" Nov 24 21:55:28 crc kubenswrapper[4767]: I1124 21:55:28.392259 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-fc7bm"] Nov 24 21:55:28 crc kubenswrapper[4767]: W1124 21:55:28.394734 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf97d9980_2ced_4225_b125_cfffc3f605c9.slice/crio-a487a1c893786e0a8d4b998ca505be14ccf30b359dd8112fdd9db7313db10f13 WatchSource:0}: Error finding container a487a1c893786e0a8d4b998ca505be14ccf30b359dd8112fdd9db7313db10f13: Status 404 returned error can't find the container with id a487a1c893786e0a8d4b998ca505be14ccf30b359dd8112fdd9db7313db10f13 Nov 24 21:55:29 crc kubenswrapper[4767]: I1124 21:55:29.212376 4767 generic.go:334] "Generic (PLEG): container finished" podID="f97d9980-2ced-4225-b125-cfffc3f605c9" containerID="a8489feca93b2f16c904d5b239c8a9ff76dac7dabe88724959ce9843b095587a" exitCode=0 Nov 24 21:55:29 crc kubenswrapper[4767]: I1124 21:55:29.212478 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" event={"ID":"f97d9980-2ced-4225-b125-cfffc3f605c9","Type":"ContainerDied","Data":"a8489feca93b2f16c904d5b239c8a9ff76dac7dabe88724959ce9843b095587a"} Nov 24 21:55:29 crc kubenswrapper[4767]: I1124 21:55:29.212760 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" event={"ID":"f97d9980-2ced-4225-b125-cfffc3f605c9","Type":"ContainerStarted","Data":"a487a1c893786e0a8d4b998ca505be14ccf30b359dd8112fdd9db7313db10f13"} Nov 24 21:55:30 crc kubenswrapper[4767]: I1124 21:55:30.226575 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" event={"ID":"f97d9980-2ced-4225-b125-cfffc3f605c9","Type":"ContainerStarted","Data":"fc889eea10c52d65d24c1fe8834229472e49a611c082efc1d6af7208f3b05088"} Nov 24 21:55:30 crc kubenswrapper[4767]: I1124 21:55:30.226919 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:30 crc kubenswrapper[4767]: I1124 21:55:30.257769 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" podStartSLOduration=3.257750692 podStartE2EDuration="3.257750692s" podCreationTimestamp="2025-11-24 21:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:55:30.252374859 +0000 UTC m=+1013.169358241" watchObservedRunningTime="2025-11-24 21:55:30.257750692 +0000 UTC m=+1013.174734064" Nov 24 21:55:35 crc kubenswrapper[4767]: I1124 21:55:35.242588 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:35 crc kubenswrapper[4767]: I1124 21:55:35.262506 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:35 crc kubenswrapper[4767]: I1124 21:55:35.285476 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mj4wm" event={"ID":"134b8eee-26a9-42c6-adec-2ac29ee455ed","Type":"ContainerStarted","Data":"8c0ba9ef8e119586eed17fcd187e6e421c462d8180cb0db5134b19f1f6af7f3b"} Nov 24 21:55:35 crc kubenswrapper[4767]: I1124 21:55:35.291476 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 21:55:35 crc kubenswrapper[4767]: I1124 21:55:35.382972 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-mj4wm" podStartSLOduration=3.970985627 podStartE2EDuration="34.382947156s" podCreationTimestamp="2025-11-24 21:55:01 +0000 UTC" firstStartedPulling="2025-11-24 21:55:04.343474932 +0000 UTC m=+987.260458304" lastFinishedPulling="2025-11-24 21:55:34.755436421 +0000 UTC m=+1017.672419833" observedRunningTime="2025-11-24 21:55:35.376339508 +0000 UTC m=+1018.293322880" watchObservedRunningTime="2025-11-24 21:55:35.382947156 +0000 UTC m=+1018.299930528" Nov 24 21:55:37 crc kubenswrapper[4767]: I1124 21:55:37.913497 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:37 crc kubenswrapper[4767]: I1124 21:55:37.988666 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-dcpx8"] Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.312703 4767 generic.go:334] "Generic (PLEG): container finished" podID="134b8eee-26a9-42c6-adec-2ac29ee455ed" containerID="8c0ba9ef8e119586eed17fcd187e6e421c462d8180cb0db5134b19f1f6af7f3b" exitCode=0 Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.318167 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-dcpx8" podUID="9f577f96-f5cf-47b3-aa5c-179164418612" containerName="dnsmasq-dns" containerID="cri-o://33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28" gracePeriod=10 Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.324073 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mj4wm" event={"ID":"134b8eee-26a9-42c6-adec-2ac29ee455ed","Type":"ContainerDied","Data":"8c0ba9ef8e119586eed17fcd187e6e421c462d8180cb0db5134b19f1f6af7f3b"} Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.729069 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.827247 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-nb\") pod \"9f577f96-f5cf-47b3-aa5c-179164418612\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.827329 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-sb\") pod \"9f577f96-f5cf-47b3-aa5c-179164418612\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.827447 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w8b4\" (UniqueName: \"kubernetes.io/projected/9f577f96-f5cf-47b3-aa5c-179164418612-kube-api-access-5w8b4\") pod \"9f577f96-f5cf-47b3-aa5c-179164418612\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.827497 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-dns-svc\") pod \"9f577f96-f5cf-47b3-aa5c-179164418612\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.827587 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-config\") pod \"9f577f96-f5cf-47b3-aa5c-179164418612\" (UID: \"9f577f96-f5cf-47b3-aa5c-179164418612\") " Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.834248 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f577f96-f5cf-47b3-aa5c-179164418612-kube-api-access-5w8b4" (OuterVolumeSpecName: "kube-api-access-5w8b4") pod "9f577f96-f5cf-47b3-aa5c-179164418612" (UID: "9f577f96-f5cf-47b3-aa5c-179164418612"). InnerVolumeSpecName "kube-api-access-5w8b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.880485 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9f577f96-f5cf-47b3-aa5c-179164418612" (UID: "9f577f96-f5cf-47b3-aa5c-179164418612"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.881021 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-config" (OuterVolumeSpecName: "config") pod "9f577f96-f5cf-47b3-aa5c-179164418612" (UID: "9f577f96-f5cf-47b3-aa5c-179164418612"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.891362 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f577f96-f5cf-47b3-aa5c-179164418612" (UID: "9f577f96-f5cf-47b3-aa5c-179164418612"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.893573 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f577f96-f5cf-47b3-aa5c-179164418612" (UID: "9f577f96-f5cf-47b3-aa5c-179164418612"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.929443 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.929474 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.929485 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w8b4\" (UniqueName: \"kubernetes.io/projected/9f577f96-f5cf-47b3-aa5c-179164418612-kube-api-access-5w8b4\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.929497 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:38 crc kubenswrapper[4767]: I1124 21:55:38.929506 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f577f96-f5cf-47b3-aa5c-179164418612-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.327867 4767 generic.go:334] "Generic (PLEG): container finished" podID="9f577f96-f5cf-47b3-aa5c-179164418612" containerID="33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28" exitCode=0 Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.327973 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-dcpx8" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.328003 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-dcpx8" event={"ID":"9f577f96-f5cf-47b3-aa5c-179164418612","Type":"ContainerDied","Data":"33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28"} Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.328086 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-dcpx8" event={"ID":"9f577f96-f5cf-47b3-aa5c-179164418612","Type":"ContainerDied","Data":"b200bf7a28bd48922767d649aa1cc3f9be8edfd3d554dda06b720ec961b9b7bc"} Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.328133 4767 scope.go:117] "RemoveContainer" containerID="33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.361068 4767 scope.go:117] "RemoveContainer" containerID="233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.382059 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-dcpx8"] Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.392915 4767 scope.go:117] "RemoveContainer" containerID="33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28" Nov 24 21:55:39 crc kubenswrapper[4767]: E1124 21:55:39.399830 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28\": container with ID starting with 33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28 not found: ID does not exist" containerID="33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.399888 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28"} err="failed to get container status \"33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28\": rpc error: code = NotFound desc = could not find container \"33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28\": container with ID starting with 33d575419b986c14b7b95a2f174140f253cb64a92fff130725462798779e3c28 not found: ID does not exist" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.399922 4767 scope.go:117] "RemoveContainer" containerID="233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39" Nov 24 21:55:39 crc kubenswrapper[4767]: E1124 21:55:39.401772 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39\": container with ID starting with 233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39 not found: ID does not exist" containerID="233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.401813 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39"} err="failed to get container status \"233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39\": rpc error: code = NotFound desc = could not find container 
\"233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39\": container with ID starting with 233164f598f3fe98b7821032f467c7c01ea26f00a9eca96b25f479cf76016a39 not found: ID does not exist" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.410178 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-dcpx8"] Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.723820 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.843755 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m88gb\" (UniqueName: \"kubernetes.io/projected/134b8eee-26a9-42c6-adec-2ac29ee455ed-kube-api-access-m88gb\") pod \"134b8eee-26a9-42c6-adec-2ac29ee455ed\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.843825 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-config-data\") pod \"134b8eee-26a9-42c6-adec-2ac29ee455ed\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.843876 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-combined-ca-bundle\") pod \"134b8eee-26a9-42c6-adec-2ac29ee455ed\" (UID: \"134b8eee-26a9-42c6-adec-2ac29ee455ed\") " Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.852544 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134b8eee-26a9-42c6-adec-2ac29ee455ed-kube-api-access-m88gb" (OuterVolumeSpecName: "kube-api-access-m88gb") pod "134b8eee-26a9-42c6-adec-2ac29ee455ed" (UID: "134b8eee-26a9-42c6-adec-2ac29ee455ed"). InnerVolumeSpecName "kube-api-access-m88gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.895510 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "134b8eee-26a9-42c6-adec-2ac29ee455ed" (UID: "134b8eee-26a9-42c6-adec-2ac29ee455ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.902732 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-config-data" (OuterVolumeSpecName: "config-data") pod "134b8eee-26a9-42c6-adec-2ac29ee455ed" (UID: "134b8eee-26a9-42c6-adec-2ac29ee455ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.945745 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m88gb\" (UniqueName: \"kubernetes.io/projected/134b8eee-26a9-42c6-adec-2ac29ee455ed-kube-api-access-m88gb\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.945777 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:39 crc kubenswrapper[4767]: I1124 21:55:39.945788 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134b8eee-26a9-42c6-adec-2ac29ee455ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.325471 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f577f96-f5cf-47b3-aa5c-179164418612" path="/var/lib/kubelet/pods/9f577f96-f5cf-47b3-aa5c-179164418612/volumes" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.343484 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mj4wm" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.343477 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mj4wm" event={"ID":"134b8eee-26a9-42c6-adec-2ac29ee455ed","Type":"ContainerDied","Data":"2997a8767797a9b30ca21d4764dde6f227c1425e6d0a7ab235593845c5b841a3"} Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.343732 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2997a8767797a9b30ca21d4764dde6f227c1425e6d0a7ab235593845c5b841a3" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.611878 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xt8r4"] Nov 24 21:55:40 crc kubenswrapper[4767]: E1124 21:55:40.612635 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f577f96-f5cf-47b3-aa5c-179164418612" containerName="init" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.612659 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f577f96-f5cf-47b3-aa5c-179164418612" containerName="init" Nov 24 21:55:40 crc kubenswrapper[4767]: E1124 21:55:40.612690 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ccb354-55d1-4901-a20c-93aaa81bc64f" containerName="dnsmasq-dns" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.612701 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ccb354-55d1-4901-a20c-93aaa81bc64f" containerName="dnsmasq-dns" Nov 24 21:55:40 crc kubenswrapper[4767]: E1124 21:55:40.612734 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134b8eee-26a9-42c6-adec-2ac29ee455ed" containerName="keystone-db-sync" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.612749 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="134b8eee-26a9-42c6-adec-2ac29ee455ed" containerName="keystone-db-sync" Nov 24 21:55:40 crc kubenswrapper[4767]: E1124 21:55:40.612767 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ccb354-55d1-4901-a20c-93aaa81bc64f" containerName="init" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.612775 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ccb354-55d1-4901-a20c-93aaa81bc64f" containerName="init" Nov 24 21:55:40 crc 
kubenswrapper[4767]: E1124 21:55:40.612791 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f577f96-f5cf-47b3-aa5c-179164418612" containerName="dnsmasq-dns" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.612799 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f577f96-f5cf-47b3-aa5c-179164418612" containerName="dnsmasq-dns" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.613030 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ccb354-55d1-4901-a20c-93aaa81bc64f" containerName="dnsmasq-dns" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.613051 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="134b8eee-26a9-42c6-adec-2ac29ee455ed" containerName="keystone-db-sync" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.613069 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f577f96-f5cf-47b3-aa5c-179164418612" containerName="dnsmasq-dns" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.615386 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.634559 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xt8r4"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.643537 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-sz58w"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.644796 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.647139 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.647584 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbsgd" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.652091 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.652335 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.652490 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.658108 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.658460 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.659713 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-config\") 
pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.659908 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.660056 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpl96\" (UniqueName: \"kubernetes.io/projected/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-kube-api-access-qpl96\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.660175 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.693560 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sz58w"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.761804 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-config-data\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.761854 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-combined-ca-bundle\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.761886 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.761911 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-scripts\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.761935 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.761956 4767 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvdv6\" (UniqueName: \"kubernetes.io/projected/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-kube-api-access-qvdv6\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.761973 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-config\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.761992 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-fernet-keys\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.762032 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.762065 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpl96\" (UniqueName: \"kubernetes.io/projected/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-kube-api-access-qpl96\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.762083 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.762172 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-credential-keys\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.763169 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.763715 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-config\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.763715 4767 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.768012 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.769601 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.786324 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.787552 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.802529 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-mrbvr" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.802701 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.810994 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.824405 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.825833 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.834692 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.838076 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpl96\" (UniqueName: \"kubernetes.io/projected/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-kube-api-access-qpl96\") pod \"dnsmasq-dns-847c4cc679-xt8r4\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871312 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd362fd6-aa93-46af-b11d-042876cf1554-logs\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871372 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwfj9\" (UniqueName: \"kubernetes.io/projected/cd362fd6-aa93-46af-b11d-042876cf1554-kube-api-access-vwfj9\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871432 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-config-data\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871459 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871494 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-config-data\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871520 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871543 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7073226-245a-41db-80c3-f30102363ae1-logs\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871597 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-credential-keys\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871622 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-config-data\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871655 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-combined-ca-bundle\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871687 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871718 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-scripts\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871759 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvdv6\" (UniqueName: \"kubernetes.io/projected/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-kube-api-access-qvdv6\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871787 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-fernet-keys\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.871818 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fttlv\" (UniqueName: \"kubernetes.io/projected/f7073226-245a-41db-80c3-f30102363ae1-kube-api-access-fttlv\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.876058 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.883317 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-tzcqj"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.885425 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.890009 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-combined-ca-bundle\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.892035 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-config-data\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.910002 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-credential-keys\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.910669 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7kxpd" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.910874 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.910951 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.911034 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.912339 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.914767 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-fernet-keys\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.920830 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-scripts\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.920960 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.941044 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tzcqj"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.943773 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.945195 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvdv6\" (UniqueName: \"kubernetes.io/projected/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-kube-api-access-qvdv6\") pod \"keystone-bootstrap-sz58w\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.961105 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.969140 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.984083 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77896db6b9-8mlpx"] Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.985634 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.986920 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.986984 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7073226-245a-41db-80c3-f30102363ae1-logs\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.987018 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26tg7\" (UniqueName: \"kubernetes.io/projected/9af43afe-d337-48a3-a1ec-568b83802765-kube-api-access-26tg7\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.987049 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.987110 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-config-data\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.987144 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.987176 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43afe-d337-48a3-a1ec-568b83802765-logs\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.987195 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29fsh\" (UniqueName: \"kubernetes.io/projected/128eda36-f009-47c2-8939-73ec23da0d4c-kube-api-access-29fsh\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.987219 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-scripts\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.987234 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fttlv\" (UniqueName: \"kubernetes.io/projected/f7073226-245a-41db-80c3-f30102363ae1-kube-api-access-fttlv\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.987253 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-config-data\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.989893 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.990454 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7073226-245a-41db-80c3-f30102363ae1-logs\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.995367 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.995587 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-hksxg" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.997628 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-combined-ca-bundle\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.997694 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd362fd6-aa93-46af-b11d-042876cf1554-logs\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.997726 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwfj9\" (UniqueName: 
\"kubernetes.io/projected/cd362fd6-aa93-46af-b11d-042876cf1554-kube-api-access-vwfj9\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.997796 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/128eda36-f009-47c2-8939-73ec23da0d4c-etc-machine-id\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.997818 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-db-sync-config-data\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.998047 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-config-data\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.998075 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.998108 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-config-data\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.998127 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:40 crc kubenswrapper[4767]: I1124 21:55:40.999831 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd362fd6-aa93-46af-b11d-042876cf1554-logs\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.014253 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.014919 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.016047 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.028634 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-config-data\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.033899 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-config-data\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.041474 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77896db6b9-8mlpx"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.041688 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.047996 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fttlv\" (UniqueName: \"kubernetes.io/projected/f7073226-245a-41db-80c3-f30102363ae1-kube-api-access-fttlv\") pod \"watcher-applier-0\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " pod="openstack/watcher-applier-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.054154 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwfj9\" (UniqueName: \"kubernetes.io/projected/cd362fd6-aa93-46af-b11d-042876cf1554-kube-api-access-vwfj9\") pod \"watcher-decision-engine-0\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.064306 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lc8sg"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.065438 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.072257 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.072525 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.072584 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fz5t8" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.080994 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lc8sg"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.099478 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.099524 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-config-data\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.099550 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26tg7\" (UniqueName: \"kubernetes.io/projected/9af43afe-d337-48a3-a1ec-568b83802765-kube-api-access-26tg7\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.099574 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.099597 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-horizon-secret-key\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.099616 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-scripts\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.099631 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-config-data\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.099648 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-logs\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.105560 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.108051 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43afe-d337-48a3-a1ec-568b83802765-logs\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.108094 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29fsh\" (UniqueName: \"kubernetes.io/projected/128eda36-f009-47c2-8939-73ec23da0d4c-kube-api-access-29fsh\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.108147 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zppmh\" (UniqueName: \"kubernetes.io/projected/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-kube-api-access-zppmh\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.108184 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-scripts\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.108217 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-config-data\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.108242 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-combined-ca-bundle\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.108625 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.108890 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43afe-d337-48a3-a1ec-568b83802765-logs\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.110207 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-config-data\") pod 
\"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.113442 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.119030 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.126446 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/128eda36-f009-47c2-8939-73ec23da0d4c-etc-machine-id\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.126505 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-db-sync-config-data\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.131285 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/128eda36-f009-47c2-8939-73ec23da0d4c-etc-machine-id\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.177385 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29fsh\" (UniqueName: \"kubernetes.io/projected/128eda36-f009-47c2-8939-73ec23da0d4c-kube-api-access-29fsh\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.177766 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-combined-ca-bundle\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.178252 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-scripts\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.178999 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-config-data\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.190237 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.190697 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26tg7\" (UniqueName: 
\"kubernetes.io/projected/9af43afe-d337-48a3-a1ec-568b83802765-kube-api-access-26tg7\") pod \"watcher-api-0\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " pod="openstack/watcher-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.196636 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-db-sync-config-data\") pod \"cinder-db-sync-tzcqj\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") " pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.209975 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.223533 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.227617 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.228992 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-config-data\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.229054 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xmpz\" (UniqueName: \"kubernetes.io/projected/92996c14-829b-4668-b74f-42e672f1b9b3-kube-api-access-9xmpz\") pod \"neutron-db-sync-lc8sg\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") " pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.229082 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-combined-ca-bundle\") pod \"neutron-db-sync-lc8sg\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") " pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.229102 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-horizon-secret-key\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.229120 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-scripts\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.229143 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-logs\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.229190 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-config\") pod \"neutron-db-sync-lc8sg\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") " pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.229241 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zppmh\" (UniqueName: \"kubernetes.io/projected/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-kube-api-access-zppmh\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.247433 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-logs\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.255829 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-horizon-secret-key\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.256625 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.265470 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-scripts\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.283746 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-config-data\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.287830 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.298889 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zppmh\" (UniqueName: \"kubernetes.io/projected/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-kube-api-access-zppmh\") pod \"horizon-77896db6b9-8mlpx\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.308754 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xt8r4"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.331397 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-log-httpd\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.331441 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-scripts\") pod \"ceilometer-0\" 
(UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.331552 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xmpz\" (UniqueName: \"kubernetes.io/projected/92996c14-829b-4668-b74f-42e672f1b9b3-kube-api-access-9xmpz\") pod \"neutron-db-sync-lc8sg\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") " pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.331571 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.331591 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-config-data\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.331614 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-combined-ca-bundle\") pod \"neutron-db-sync-lc8sg\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") " pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.331686 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-config\") pod \"neutron-db-sync-lc8sg\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") " pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.331734 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.331780 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-run-httpd\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.331802 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkxc9\" (UniqueName: \"kubernetes.io/projected/d7a9ba0d-f67a-4887-82d8-3135cf56098a-kube-api-access-lkxc9\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.340572 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-combined-ca-bundle\") pod \"neutron-db-sync-lc8sg\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") " pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.353570 4767 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/horizon-5bfbc56cc-98l48"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.361191 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.355861 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-config\") pod \"neutron-db-sync-lc8sg\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") " pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.376613 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xmpz\" (UniqueName: \"kubernetes.io/projected/92996c14-829b-4668-b74f-42e672f1b9b3-kube-api-access-9xmpz\") pod \"neutron-db-sync-lc8sg\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") " pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.378785 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bfbc56cc-98l48"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.387632 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-2cfjb"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.416070 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-r9fp5"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.418837 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.429688 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-r9fp5"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.429796 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.431781 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.432158 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hk9tc" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.433170 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkxc9\" (UniqueName: \"kubernetes.io/projected/d7a9ba0d-f67a-4887-82d8-3135cf56098a-kube-api-access-lkxc9\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.433347 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-scripts\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.433446 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-config-data\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.433529 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-log-httpd\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.433611 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-scripts\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.433742 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.433813 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-config-data\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.433901 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrz4j\" (UniqueName: \"kubernetes.io/projected/c1f11c62-caea-4b02-9a66-6c385a3b93c0-kube-api-access-jrz4j\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.434006 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.434105 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1f11c62-caea-4b02-9a66-6c385a3b93c0-horizon-secret-key\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.434188 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-run-httpd\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.434260 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1f11c62-caea-4b02-9a66-6c385a3b93c0-logs\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.435666 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-log-httpd\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.442661 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-run-httpd\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.442666 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.446362 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hd5nf"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.447528 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.461311 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.461725 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.467820 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jmhbv" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.471407 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-2cfjb"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.478855 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hd5nf"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.491342 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-config-data\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.505324 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.506773 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.512205 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkxc9\" (UniqueName: \"kubernetes.io/projected/d7a9ba0d-f67a-4887-82d8-3135cf56098a-kube-api-access-lkxc9\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.515893 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.516094 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.516200 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2c6jp" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.517259 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.526979 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-scripts\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.530949 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.534697 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.537577 4767 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1f11c62-caea-4b02-9a66-6c385a3b93c0-logs\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.537949 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzm85\" (UniqueName: \"kubernetes.io/projected/83eba727-cd44-4013-8ce3-5672f4f7f595-kube-api-access-nzm85\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.538078 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-scripts\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.538163 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-config-data\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.538258 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-config-data\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.539240 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-combined-ca-bundle\") pod \"barbican-db-sync-r9fp5\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.539409 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzbln\" (UniqueName: \"kubernetes.io/projected/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-kube-api-access-jzbln\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.539508 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83eba727-cd44-4013-8ce3-5672f4f7f595-logs\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.539587 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.539670 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-combined-ca-bundle\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.539195 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-scripts\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.539760 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqpjs\" (UniqueName: \"kubernetes.io/projected/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-kube-api-access-xqpjs\") pod \"barbican-db-sync-r9fp5\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.538718 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1f11c62-caea-4b02-9a66-6c385a3b93c0-logs\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.540311 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-config\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.540540 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.540633 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.540789 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.540867 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-scripts\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.540900 4767 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-jrz4j\" (UniqueName: \"kubernetes.io/projected/c1f11c62-caea-4b02-9a66-6c385a3b93c0-kube-api-access-jrz4j\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.541025 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-db-sync-config-data\") pod \"barbican-db-sync-r9fp5\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.541073 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1f11c62-caea-4b02-9a66-6c385a3b93c0-horizon-secret-key\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.541380 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-config-data\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.557458 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.565681 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.575080 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1f11c62-caea-4b02-9a66-6c385a3b93c0-horizon-secret-key\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.582770 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrz4j\" (UniqueName: \"kubernetes.io/projected/c1f11c62-caea-4b02-9a66-6c385a3b93c0-kube-api-access-jrz4j\") pod \"horizon-5bfbc56cc-98l48\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.605872 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.620975 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.641993 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.643825 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.646630 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647626 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647673 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzm85\" (UniqueName: \"kubernetes.io/projected/83eba727-cd44-4013-8ce3-5672f4f7f595-kube-api-access-nzm85\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647704 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-config-data\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647735 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-combined-ca-bundle\") pod \"barbican-db-sync-r9fp5\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647758 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647779 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzbln\" (UniqueName: \"kubernetes.io/projected/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-kube-api-access-jzbln\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647804 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83eba727-cd44-4013-8ce3-5672f4f7f595-logs\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647819 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647838 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-combined-ca-bundle\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647860 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqpjs\" (UniqueName: \"kubernetes.io/projected/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-kube-api-access-xqpjs\") pod \"barbican-db-sync-r9fp5\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647880 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-config\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647905 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-scripts\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647929 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647949 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647966 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.647989 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.648011 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-logs\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.648038 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-scripts\") pod 
\"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.648059 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.648105 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-config-data\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.648121 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-db-sync-config-data\") pod \"barbican-db-sync-r9fp5\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.648149 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptg4m\" (UniqueName: \"kubernetes.io/projected/67179f66-3806-4c95-b46a-858e6ad7575b-kube-api-access-ptg4m\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.649534 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.650245 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.651680 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.664133 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-config-data\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.683408 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.696851 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.697216 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83eba727-cd44-4013-8ce3-5672f4f7f595-logs\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.697743 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.699950 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-config\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.701213 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-combined-ca-bundle\") pod \"barbican-db-sync-r9fp5\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.704469 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzm85\" (UniqueName: \"kubernetes.io/projected/83eba727-cd44-4013-8ce3-5672f4f7f595-kube-api-access-nzm85\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.704789 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-scripts\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.711181 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-combined-ca-bundle\") pod \"placement-db-sync-hd5nf\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") " pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.713720 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-db-sync-config-data\") pod \"barbican-db-sync-r9fp5\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.725604 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.749163 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.749243 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-scripts\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.749283 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.749308 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-logs\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.749339 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.749372 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-config-data\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.749397 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptg4m\" (UniqueName: \"kubernetes.io/projected/67179f66-3806-4c95-b46a-858e6ad7575b-kube-api-access-ptg4m\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.749421 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.751388 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 
crc kubenswrapper[4767]: I1124 21:55:41.757339 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqpjs\" (UniqueName: \"kubernetes.io/projected/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-kube-api-access-xqpjs\") pod \"barbican-db-sync-r9fp5\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.760606 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.761182 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-logs\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.766396 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-scripts\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.769978 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.771757 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptg4m\" (UniqueName: \"kubernetes.io/projected/67179f66-3806-4c95-b46a-858e6ad7575b-kube-api-access-ptg4m\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.774548 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.774794 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzbln\" (UniqueName: \"kubernetes.io/projected/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-kube-api-access-jzbln\") pod \"dnsmasq-dns-785d8bcb8c-2cfjb\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.776038 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-config-data\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.847563 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.858397 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.858457 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.858478 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-logs\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.858500 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.858521 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqt2t\" (UniqueName: \"kubernetes.io/projected/40a0ce8a-9011-4054-9988-bf6d9522caa4-kube-api-access-sqt2t\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.858561 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.858828 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.858851 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.869777 4767 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.905999 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.946768 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hd5nf" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.961045 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.961087 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.961113 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-logs\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.961134 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.961153 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqt2t\" (UniqueName: \"kubernetes.io/projected/40a0ce8a-9011-4054-9988-bf6d9522caa4-kube-api-access-sqt2t\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.961172 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.961209 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.961224 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 
21:55:41.961798 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.961913 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.965194 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-logs\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.976070 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.981334 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.987334 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.987908 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.987977 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqt2t\" (UniqueName: \"kubernetes.io/projected/40a0ce8a-9011-4054-9988-bf6d9522caa4-kube-api-access-sqt2t\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:41 crc kubenswrapper[4767]: I1124 21:55:41.990537 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:42 crc kubenswrapper[4767]: W1124 21:55:42.073457 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd0c4792_3d17_4ccf_893f_1ceb2eb17d35.slice/crio-e27cb5d34214a2a2ed79322a07e33cb928b1fe897641dd3254cb8b3f8b732f15 WatchSource:0}: Error finding container 
e27cb5d34214a2a2ed79322a07e33cb928b1fe897641dd3254cb8b3f8b732f15: Status 404 returned error can't find the container with id e27cb5d34214a2a2ed79322a07e33cb928b1fe897641dd3254cb8b3f8b732f15 Nov 24 21:55:42 crc kubenswrapper[4767]: I1124 21:55:42.087820 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:42 crc kubenswrapper[4767]: I1124 21:55:42.089235 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:55:42 crc kubenswrapper[4767]: I1124 21:55:42.132227 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sz58w"] Nov 24 21:55:42 crc kubenswrapper[4767]: I1124 21:55:42.154776 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xt8r4"] Nov 24 21:55:42 crc kubenswrapper[4767]: I1124 21:55:42.293961 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:55:42 crc kubenswrapper[4767]: I1124 21:55:42.406923 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sz58w" event={"ID":"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35","Type":"ContainerStarted","Data":"e27cb5d34214a2a2ed79322a07e33cb928b1fe897641dd3254cb8b3f8b732f15"} Nov 24 21:55:42 crc kubenswrapper[4767]: I1124 21:55:42.414554 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"cd362fd6-aa93-46af-b11d-042876cf1554","Type":"ContainerStarted","Data":"021df0605bbf28ae221b96d08a3a18606fb47e0b54f9a344eda2c20fe416b33b"} Nov 24 21:55:42 crc kubenswrapper[4767]: I1124 21:55:42.417197 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" event={"ID":"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d","Type":"ContainerStarted","Data":"80613b97457505524f6d3cf663d3ec832ad7c4fddd5cba3479aa57d348178d7e"} Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:42.607496 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tzcqj"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:42.639799 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:42.680090 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bfbc56cc-98l48"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:42.696390 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:42.708532 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77896db6b9-8mlpx"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:42.959681 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:42.984205 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.001138 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77896db6b9-8mlpx"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.034073 4767 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-5fcf7cc567-vhj2w"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.036756 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.059722 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.069450 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fcf7cc567-vhj2w"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.221714 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpqd9\" (UniqueName: \"kubernetes.io/projected/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-kube-api-access-dpqd9\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.221827 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-horizon-secret-key\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.221848 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-scripts\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.221872 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-config-data\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.222623 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-logs\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.327464 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-horizon-secret-key\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.327510 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-scripts\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.327534 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-config-data\") pod 
\"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.327605 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-logs\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.327643 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpqd9\" (UniqueName: \"kubernetes.io/projected/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-kube-api-access-dpqd9\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.330715 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-config-data\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.331124 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-scripts\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.331194 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-logs\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.334294 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-horizon-secret-key\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.345515 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpqd9\" (UniqueName: \"kubernetes.io/projected/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-kube-api-access-dpqd9\") pod \"horizon-5fcf7cc567-vhj2w\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.400501 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.429989 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"f7073226-245a-41db-80c3-f30102363ae1","Type":"ContainerStarted","Data":"172e5cdacce2cc16e3efee62c81c2839605064f6a6e16dce4dcb058f4ea995e4"} Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.452185 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77896db6b9-8mlpx" event={"ID":"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3","Type":"ContainerStarted","Data":"c6ae6d294f2cec44e1c7bdc93f166d87036f86502112f83b5b3a0b1b9cd1f566"} Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.454967 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.469655 4767 generic.go:334] "Generic (PLEG): container finished" podID="5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d" containerID="2b260f936e4decbe78c52d1f1cd98017b5fcc2e56f1f7b8e50ad2442b105e5fc" exitCode=0 Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.469718 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" event={"ID":"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d","Type":"ContainerDied","Data":"2b260f936e4decbe78c52d1f1cd98017b5fcc2e56f1f7b8e50ad2442b105e5fc"} Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.475753 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lc8sg"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.500213 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bfbc56cc-98l48" event={"ID":"c1f11c62-caea-4b02-9a66-6c385a3b93c0","Type":"ContainerStarted","Data":"eb1ee7ee3d875120c9a3b3584a4986edd443bc1bbeb99e87496d09b66088956f"} Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.519818 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-2cfjb"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.530631 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sz58w" event={"ID":"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35","Type":"ContainerStarted","Data":"c5cede5ea26ecf48759374285f4500a72b56290d5a6897c73460e5862105b6a7"} Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.542947 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tzcqj" event={"ID":"128eda36-f009-47c2-8939-73ec23da0d4c","Type":"ContainerStarted","Data":"7165f9d5c3e3786af10718f074fbc95eaf35f2740f343dbf58ad7e04dda46879"} Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.544567 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.545025 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9af43afe-d337-48a3-a1ec-568b83802765","Type":"ContainerStarted","Data":"c45b38293ca7e57ff55daeff944b8407c979da4ebe2ded94711e7ad85868ea38"} Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.545060 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9af43afe-d337-48a3-a1ec-568b83802765","Type":"ContainerStarted","Data":"d484e8619bb36dd3aca1e056c18184a6e0ceb6f334c110f87715c8e610488876"} Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.574619 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-bootstrap-sz58w" podStartSLOduration=3.574601934 podStartE2EDuration="3.574601934s" podCreationTimestamp="2025-11-24 21:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:55:43.547468554 +0000 UTC m=+1026.464451926" watchObservedRunningTime="2025-11-24 21:55:43.574601934 +0000 UTC m=+1026.491585306" Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.594321 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.603468 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hd5nf"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.611635 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-r9fp5"] Nov 24 21:55:43 crc kubenswrapper[4767]: I1124 21:55:43.777024 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.597559 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.602551 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" event={"ID":"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d","Type":"ContainerDied","Data":"80613b97457505524f6d3cf663d3ec832ad7c4fddd5cba3479aa57d348178d7e"} Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.602615 4767 scope.go:117] "RemoveContainer" containerID="2b260f936e4decbe78c52d1f1cd98017b5fcc2e56f1f7b8e50ad2442b105e5fc" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.607436 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hd5nf" event={"ID":"83eba727-cd44-4013-8ce3-5672f4f7f595","Type":"ContainerStarted","Data":"5cfe502469f5930b5bfd39de360d3f90a715dc5cec8d4446ce3a77b2b6635a36"} Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.610502 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a9ba0d-f67a-4887-82d8-3135cf56098a","Type":"ContainerStarted","Data":"47c30100f32fbbd9eafec906d9f57abb1c34b0e69fa284bd02cf9ce91bb2b03f"} Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.614215 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lc8sg" event={"ID":"92996c14-829b-4668-b74f-42e672f1b9b3","Type":"ContainerStarted","Data":"98faa3c52197d9a96bfaa9d89a613aabd4a908393eed7e93d24ffc558ca7986a"} Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.620653 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40a0ce8a-9011-4054-9988-bf6d9522caa4","Type":"ContainerStarted","Data":"1ff6e5d8441925272a5425bb6fbecb3fcc65ac4f9d6d9c9fff3f4f6f6c78eef1"} Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.622402 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" event={"ID":"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0","Type":"ContainerStarted","Data":"c513fbeaba569fb1da7c4e331a36feb4b0ef93a8185ff88576c0aae966d57826"} Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.640670 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"67179f66-3806-4c95-b46a-858e6ad7575b","Type":"ContainerStarted","Data":"b1c2a234a2ca216a8d7bb9b21957cce51c12444cdef75e9f922aa0681d438efa"} Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.662584 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r9fp5" event={"ID":"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703","Type":"ContainerStarted","Data":"74bfa38559c16c38a429150ddd4004bd83260c1d928270ecd961782830fd943c"} Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.770448 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpl96\" (UniqueName: \"kubernetes.io/projected/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-kube-api-access-qpl96\") pod \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.770508 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-nb\") pod \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.770551 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-sb\") pod \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.770649 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-config\") pod \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.770682 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-svc\") pod \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.770717 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-swift-storage-0\") pod \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\" (UID: \"5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d\") " Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.777983 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-kube-api-access-qpl96" (OuterVolumeSpecName: "kube-api-access-qpl96") pod "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d" (UID: "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d"). InnerVolumeSpecName "kube-api-access-qpl96". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.802023 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d" (UID: "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.802598 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d" (UID: "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.806320 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d" (UID: "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.807657 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d" (UID: "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.815399 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-config" (OuterVolumeSpecName: "config") pod "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d" (UID: "5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.875200 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.875591 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.875603 4767 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.875616 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpl96\" (UniqueName: \"kubernetes.io/projected/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-kube-api-access-qpl96\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.875625 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.875633 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:44 crc kubenswrapper[4767]: I1124 21:55:44.929763 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-5fcf7cc567-vhj2w"] Nov 24 21:55:45 crc kubenswrapper[4767]: W1124 21:55:45.100587 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod583d56e4_c8bb_4f8e_9d6c_8623c078a1b6.slice/crio-23e86a55b3deca8b3396235ccb86e7e67d10f5e4cfcdff236e377059e8eea318 WatchSource:0}: Error finding container 23e86a55b3deca8b3396235ccb86e7e67d10f5e4cfcdff236e377059e8eea318: Status 404 returned error can't find the container with id 23e86a55b3deca8b3396235ccb86e7e67d10f5e4cfcdff236e377059e8eea318 Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.682840 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67179f66-3806-4c95-b46a-858e6ad7575b","Type":"ContainerStarted","Data":"261c1847bbee7e49c289aeb51d9411c250fdd6939fdea3cd1a37f8988a8d7575"} Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.686848 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lc8sg" event={"ID":"92996c14-829b-4668-b74f-42e672f1b9b3","Type":"ContainerStarted","Data":"e60977c789ead8b141e42c27319cf77ce4315398c54b033209d9239eb062d0d4"} Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.694169 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40a0ce8a-9011-4054-9988-bf6d9522caa4","Type":"ContainerStarted","Data":"631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80"} Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.697176 4767 generic.go:334] "Generic (PLEG): container finished" podID="42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" containerID="a162fff69b46ab614d15ba59c59c5d1e99e042cb74e22b561c6faa03e563ffdb" exitCode=0 Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.697251 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" event={"ID":"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0","Type":"ContainerDied","Data":"a162fff69b46ab614d15ba59c59c5d1e99e042cb74e22b561c6faa03e563ffdb"} Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.707647 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"cd362fd6-aa93-46af-b11d-042876cf1554","Type":"ContainerStarted","Data":"a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118"} Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.710457 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-xt8r4" Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.711733 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fcf7cc567-vhj2w" event={"ID":"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6","Type":"ContainerStarted","Data":"23e86a55b3deca8b3396235ccb86e7e67d10f5e4cfcdff236e377059e8eea318"} Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.712786 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lc8sg" podStartSLOduration=5.712774252 podStartE2EDuration="5.712774252s" podCreationTimestamp="2025-11-24 21:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:55:45.705111784 +0000 UTC m=+1028.622095176" watchObservedRunningTime="2025-11-24 21:55:45.712774252 +0000 UTC m=+1028.629757624" Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.716975 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9af43afe-d337-48a3-a1ec-568b83802765","Type":"ContainerStarted","Data":"cfdf82a508d9116de96afd602ec8e8eb0e4e52fec42991df41fb5de1ad453088"} Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.717134 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api-log" containerID="cri-o://c45b38293ca7e57ff55daeff944b8407c979da4ebe2ded94711e7ad85868ea38" gracePeriod=30 Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.717186 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api" containerID="cri-o://cfdf82a508d9116de96afd602ec8e8eb0e4e52fec42991df41fb5de1ad453088" gracePeriod=30 Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.717251 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.732718 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"f7073226-245a-41db-80c3-f30102363ae1","Type":"ContainerStarted","Data":"af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094"} Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.762078 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.439090807 podStartE2EDuration="5.76203828s" podCreationTimestamp="2025-11-24 21:55:40 +0000 UTC" firstStartedPulling="2025-11-24 21:55:42.162298456 +0000 UTC m=+1025.079281828" lastFinishedPulling="2025-11-24 21:55:44.485245929 +0000 UTC m=+1027.402229301" observedRunningTime="2025-11-24 21:55:45.757172572 +0000 UTC m=+1028.674155944" watchObservedRunningTime="2025-11-24 21:55:45.76203828 +0000 UTC m=+1028.679021652" Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.767935 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": EOF" Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.785010 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.784995232 podStartE2EDuration="5.784995232s" 
podCreationTimestamp="2025-11-24 21:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:55:45.783206431 +0000 UTC m=+1028.700189803" watchObservedRunningTime="2025-11-24 21:55:45.784995232 +0000 UTC m=+1028.701978604" Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.869534 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xt8r4"] Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.874062 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xt8r4"] Nov 24 21:55:45 crc kubenswrapper[4767]: I1124 21:55:45.882430 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=4.016051628 podStartE2EDuration="5.882412338s" podCreationTimestamp="2025-11-24 21:55:40 +0000 UTC" firstStartedPulling="2025-11-24 21:55:42.652667478 +0000 UTC m=+1025.569650850" lastFinishedPulling="2025-11-24 21:55:44.519028188 +0000 UTC m=+1027.436011560" observedRunningTime="2025-11-24 21:55:45.850204564 +0000 UTC m=+1028.767187936" watchObservedRunningTime="2025-11-24 21:55:45.882412338 +0000 UTC m=+1028.799395710" Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.124426 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.252139 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.324949 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d" path="/var/lib/kubelet/pods/5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d/volumes" Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.749740 4767 generic.go:334] "Generic (PLEG): container finished" podID="cd0c4792-3d17-4ccf-893f-1ceb2eb17d35" containerID="c5cede5ea26ecf48759374285f4500a72b56290d5a6897c73460e5862105b6a7" exitCode=0 Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.749906 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sz58w" event={"ID":"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35","Type":"ContainerDied","Data":"c5cede5ea26ecf48759374285f4500a72b56290d5a6897c73460e5862105b6a7"} Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.767816 4767 generic.go:334] "Generic (PLEG): container finished" podID="9af43afe-d337-48a3-a1ec-568b83802765" containerID="c45b38293ca7e57ff55daeff944b8407c979da4ebe2ded94711e7ad85868ea38" exitCode=143 Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.767915 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9af43afe-d337-48a3-a1ec-568b83802765","Type":"ContainerDied","Data":"c45b38293ca7e57ff55daeff944b8407c979da4ebe2ded94711e7ad85868ea38"} Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.770817 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" event={"ID":"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0","Type":"ContainerStarted","Data":"25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3"} Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.770987 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.774561 4767 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="40a0ce8a-9011-4054-9988-bf6d9522caa4" containerName="glance-log" containerID="cri-o://631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80" gracePeriod=30 Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.774576 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="40a0ce8a-9011-4054-9988-bf6d9522caa4" containerName="glance-httpd" containerID="cri-o://eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77" gracePeriod=30 Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.813695 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.813658737 podStartE2EDuration="5.813658737s" podCreationTimestamp="2025-11-24 21:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:55:46.812321829 +0000 UTC m=+1029.729305201" watchObservedRunningTime="2025-11-24 21:55:46.813658737 +0000 UTC m=+1029.730642119" Nov 24 21:55:46 crc kubenswrapper[4767]: I1124 21:55:46.840983 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" podStartSLOduration=5.840958752 podStartE2EDuration="5.840958752s" podCreationTimestamp="2025-11-24 21:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:55:46.828940861 +0000 UTC m=+1029.745924223" watchObservedRunningTime="2025-11-24 21:55:46.840958752 +0000 UTC m=+1029.757942134" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.456877 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.546755 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-logs\") pod \"40a0ce8a-9011-4054-9988-bf6d9522caa4\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.546903 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-combined-ca-bundle\") pod \"40a0ce8a-9011-4054-9988-bf6d9522caa4\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.546954 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-httpd-run\") pod \"40a0ce8a-9011-4054-9988-bf6d9522caa4\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.546985 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-scripts\") pod \"40a0ce8a-9011-4054-9988-bf6d9522caa4\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.547038 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-internal-tls-certs\") pod \"40a0ce8a-9011-4054-9988-bf6d9522caa4\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.547111 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"40a0ce8a-9011-4054-9988-bf6d9522caa4\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.547172 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqt2t\" (UniqueName: \"kubernetes.io/projected/40a0ce8a-9011-4054-9988-bf6d9522caa4-kube-api-access-sqt2t\") pod \"40a0ce8a-9011-4054-9988-bf6d9522caa4\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.547202 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-config-data\") pod \"40a0ce8a-9011-4054-9988-bf6d9522caa4\" (UID: \"40a0ce8a-9011-4054-9988-bf6d9522caa4\") " Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.547793 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-logs" (OuterVolumeSpecName: "logs") pod "40a0ce8a-9011-4054-9988-bf6d9522caa4" (UID: "40a0ce8a-9011-4054-9988-bf6d9522caa4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.548175 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.552457 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "40a0ce8a-9011-4054-9988-bf6d9522caa4" (UID: "40a0ce8a-9011-4054-9988-bf6d9522caa4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.555408 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "40a0ce8a-9011-4054-9988-bf6d9522caa4" (UID: "40a0ce8a-9011-4054-9988-bf6d9522caa4"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.558442 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-scripts" (OuterVolumeSpecName: "scripts") pod "40a0ce8a-9011-4054-9988-bf6d9522caa4" (UID: "40a0ce8a-9011-4054-9988-bf6d9522caa4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.558453 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a0ce8a-9011-4054-9988-bf6d9522caa4-kube-api-access-sqt2t" (OuterVolumeSpecName: "kube-api-access-sqt2t") pod "40a0ce8a-9011-4054-9988-bf6d9522caa4" (UID: "40a0ce8a-9011-4054-9988-bf6d9522caa4"). InnerVolumeSpecName "kube-api-access-sqt2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.601417 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40a0ce8a-9011-4054-9988-bf6d9522caa4" (UID: "40a0ce8a-9011-4054-9988-bf6d9522caa4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.626753 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "40a0ce8a-9011-4054-9988-bf6d9522caa4" (UID: "40a0ce8a-9011-4054-9988-bf6d9522caa4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.643980 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-config-data" (OuterVolumeSpecName: "config-data") pod "40a0ce8a-9011-4054-9988-bf6d9522caa4" (UID: "40a0ce8a-9011-4054-9988-bf6d9522caa4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.649896 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.649931 4767 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.650047 4767 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.650067 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqt2t\" (UniqueName: \"kubernetes.io/projected/40a0ce8a-9011-4054-9988-bf6d9522caa4-kube-api-access-sqt2t\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.650079 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.650126 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a0ce8a-9011-4054-9988-bf6d9522caa4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.650139 4767 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40a0ce8a-9011-4054-9988-bf6d9522caa4-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.681362 4767 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.752291 4767 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.783984 4767 generic.go:334] "Generic (PLEG): container finished" podID="40a0ce8a-9011-4054-9988-bf6d9522caa4" containerID="eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77" exitCode=143 Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.784012 4767 generic.go:334] "Generic (PLEG): container finished" podID="40a0ce8a-9011-4054-9988-bf6d9522caa4" containerID="631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80" exitCode=143 Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.784054 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40a0ce8a-9011-4054-9988-bf6d9522caa4","Type":"ContainerDied","Data":"eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77"} Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.784086 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40a0ce8a-9011-4054-9988-bf6d9522caa4","Type":"ContainerDied","Data":"631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80"} Nov 24 21:55:47 crc 
kubenswrapper[4767]: I1124 21:55:47.784099 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40a0ce8a-9011-4054-9988-bf6d9522caa4","Type":"ContainerDied","Data":"1ff6e5d8441925272a5425bb6fbecb3fcc65ac4f9d6d9c9fff3f4f6f6c78eef1"} Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.784117 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.784134 4767 scope.go:117] "RemoveContainer" containerID="eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.793934 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67179f66-3806-4c95-b46a-858e6ad7575b","Type":"ContainerStarted","Data":"46d7f4226950d813eca43d440e5e2c7b85f580e40e191d317d23551926d62ccf"} Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.794057 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="67179f66-3806-4c95-b46a-858e6ad7575b" containerName="glance-log" containerID="cri-o://261c1847bbee7e49c289aeb51d9411c250fdd6939fdea3cd1a37f8988a8d7575" gracePeriod=30 Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.794108 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="67179f66-3806-4c95-b46a-858e6ad7575b" containerName="glance-httpd" containerID="cri-o://46d7f4226950d813eca43d440e5e2c7b85f580e40e191d317d23551926d62ccf" gracePeriod=30 Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.824823 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.824800896 podStartE2EDuration="6.824800896s" podCreationTimestamp="2025-11-24 21:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:55:47.815687197 +0000 UTC m=+1030.732670569" watchObservedRunningTime="2025-11-24 21:55:47.824800896 +0000 UTC m=+1030.741784268" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.834874 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.840916 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.869852 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:55:47 crc kubenswrapper[4767]: E1124 21:55:47.870324 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a0ce8a-9011-4054-9988-bf6d9522caa4" containerName="glance-log" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.870341 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a0ce8a-9011-4054-9988-bf6d9522caa4" containerName="glance-log" Nov 24 21:55:47 crc kubenswrapper[4767]: E1124 21:55:47.870374 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d" containerName="init" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.870382 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d" containerName="init" Nov 24 21:55:47 crc 
kubenswrapper[4767]: E1124 21:55:47.870398 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a0ce8a-9011-4054-9988-bf6d9522caa4" containerName="glance-httpd" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.870407 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a0ce8a-9011-4054-9988-bf6d9522caa4" containerName="glance-httpd" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.870654 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a0ce8a-9011-4054-9988-bf6d9522caa4" containerName="glance-httpd" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.870667 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d3ed96f-bb26-4e7c-b94c-db410d4c8e7d" containerName="init" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.870679 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a0ce8a-9011-4054-9988-bf6d9522caa4" containerName="glance-log" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.871953 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.877872 4767 scope.go:117] "RemoveContainer" containerID="631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.878623 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.878896 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.887183 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.955620 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vl4r\" (UniqueName: \"kubernetes.io/projected/deb2678c-0ca9-48c9-952d-a0933f8dc512-kube-api-access-9vl4r\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.955705 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.955737 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-config-data\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.955770 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.955819 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-scripts\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.955873 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-logs\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.955921 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:47 crc kubenswrapper[4767]: I1124 21:55:47.955954 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.016389 4767 scope.go:117] "RemoveContainer" containerID="eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77" Nov 24 21:55:48 crc kubenswrapper[4767]: E1124 21:55:48.016883 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77\": container with ID starting with eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77 not found: ID does not exist" containerID="eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.016941 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77"} err="failed to get container status \"eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77\": rpc error: code = NotFound desc = could not find container \"eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77\": container with ID starting with eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77 not found: ID does not exist" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.016968 4767 scope.go:117] "RemoveContainer" containerID="631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80" Nov 24 21:55:48 crc kubenswrapper[4767]: E1124 21:55:48.017574 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80\": container with ID starting with 631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80 not found: ID does not exist" containerID="631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.017627 4767 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80"} err="failed to get container status \"631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80\": rpc error: code = NotFound desc = could not find container \"631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80\": container with ID starting with 631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80 not found: ID does not exist" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.017645 4767 scope.go:117] "RemoveContainer" containerID="eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.018197 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77"} err="failed to get container status \"eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77\": rpc error: code = NotFound desc = could not find container \"eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77\": container with ID starting with eef56d7d99d97f06cef5c5bd9ef06dbf56bdac177e71f5a1c4222c48aae8ca77 not found: ID does not exist" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.018238 4767 scope.go:117] "RemoveContainer" containerID="631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.018572 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80"} err="failed to get container status \"631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80\": rpc error: code = NotFound desc = could not find container \"631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80\": container with ID starting with 631a4ee4009e493d094deaac2e7d417cec48430e2b280445a4cc6ed59f7bfd80 not found: ID does not exist" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.057860 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-logs\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.057966 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.058019 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.058051 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vl4r\" (UniqueName: \"kubernetes.io/projected/deb2678c-0ca9-48c9-952d-a0933f8dc512-kube-api-access-9vl4r\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " 
pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.058101 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.058123 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-config-data\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.058162 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.058203 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-scripts\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.058348 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-logs\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.061815 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.062150 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.070530 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.091238 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vl4r\" (UniqueName: \"kubernetes.io/projected/deb2678c-0ca9-48c9-952d-a0933f8dc512-kube-api-access-9vl4r\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.092241 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-config-data\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.105017 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.108490 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-scripts\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.151000 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.217946 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.263851 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-scripts\") pod \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.263943 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-fernet-keys\") pod \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.263969 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvdv6\" (UniqueName: \"kubernetes.io/projected/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-kube-api-access-qvdv6\") pod \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.263988 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-config-data\") pod \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.264025 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-credential-keys\") pod \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.264100 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-combined-ca-bundle\") pod \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\" (UID: \"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35\") " Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.273199 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-kube-api-access-qvdv6" (OuterVolumeSpecName: "kube-api-access-qvdv6") pod "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35" (UID: "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35"). InnerVolumeSpecName "kube-api-access-qvdv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.295583 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.303619 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-scripts" (OuterVolumeSpecName: "scripts") pod "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35" (UID: "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.305207 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35" (UID: "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.307350 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35" (UID: "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.325793 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-config-data" (OuterVolumeSpecName: "config-data") pod "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35" (UID: "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.330590 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35" (UID: "cd0c4792-3d17-4ccf-893f-1ceb2eb17d35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.354498 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40a0ce8a-9011-4054-9988-bf6d9522caa4" path="/var/lib/kubelet/pods/40a0ce8a-9011-4054-9988-bf6d9522caa4/volumes" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.366552 4767 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.366573 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.366582 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.366590 4767 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.366599 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvdv6\" (UniqueName: \"kubernetes.io/projected/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-kube-api-access-qvdv6\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.366608 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.820640 4767 generic.go:334] "Generic (PLEG): container finished" podID="67179f66-3806-4c95-b46a-858e6ad7575b" containerID="46d7f4226950d813eca43d440e5e2c7b85f580e40e191d317d23551926d62ccf" exitCode=0 Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.820902 4767 generic.go:334] "Generic (PLEG): container finished" podID="67179f66-3806-4c95-b46a-858e6ad7575b" containerID="261c1847bbee7e49c289aeb51d9411c250fdd6939fdea3cd1a37f8988a8d7575" exitCode=143 Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.820943 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67179f66-3806-4c95-b46a-858e6ad7575b","Type":"ContainerDied","Data":"46d7f4226950d813eca43d440e5e2c7b85f580e40e191d317d23551926d62ccf"} Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.820969 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67179f66-3806-4c95-b46a-858e6ad7575b","Type":"ContainerDied","Data":"261c1847bbee7e49c289aeb51d9411c250fdd6939fdea3cd1a37f8988a8d7575"} Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.826088 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sz58w" event={"ID":"cd0c4792-3d17-4ccf-893f-1ceb2eb17d35","Type":"ContainerDied","Data":"e27cb5d34214a2a2ed79322a07e33cb928b1fe897641dd3254cb8b3f8b732f15"} Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.826128 4767 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e27cb5d34214a2a2ed79322a07e33cb928b1fe897641dd3254cb8b3f8b732f15" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.826181 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sz58w" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.867607 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-sz58w"] Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.876194 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-sz58w"] Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.954139 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2k8wb"] Nov 24 21:55:48 crc kubenswrapper[4767]: E1124 21:55:48.954554 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0c4792-3d17-4ccf-893f-1ceb2eb17d35" containerName="keystone-bootstrap" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.955850 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0c4792-3d17-4ccf-893f-1ceb2eb17d35" containerName="keystone-bootstrap" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.956066 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0c4792-3d17-4ccf-893f-1ceb2eb17d35" containerName="keystone-bootstrap" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.956697 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.961822 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.961956 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.962046 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.962253 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.962378 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbsgd" Nov 24 21:55:48 crc kubenswrapper[4767]: I1124 21:55:48.967300 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2k8wb"] Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.079897 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-scripts\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.079950 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-config-data\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.079980 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-combined-ca-bundle\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.080034 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlswg\" (UniqueName: \"kubernetes.io/projected/54aafebf-445c-4632-81c3-1f35b84a4ef7-kube-api-access-wlswg\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.080054 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-fernet-keys\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.080070 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-credential-keys\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.181584 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-scripts\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.181645 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-config-data\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.181676 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-combined-ca-bundle\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.181730 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlswg\" (UniqueName: \"kubernetes.io/projected/54aafebf-445c-4632-81c3-1f35b84a4ef7-kube-api-access-wlswg\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.181757 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-fernet-keys\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.181776 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-credential-keys\") 
pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.190041 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-credential-keys\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.190082 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-combined-ca-bundle\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.190291 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-scripts\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.201229 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-config-data\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.201343 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-fernet-keys\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.204485 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlswg\" (UniqueName: \"kubernetes.io/projected/54aafebf-445c-4632-81c3-1f35b84a4ef7-kube-api-access-wlswg\") pod \"keystone-bootstrap-2k8wb\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") " pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:49 crc kubenswrapper[4767]: I1124 21:55:49.284811 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2k8wb" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.335363 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0c4792-3d17-4ccf-893f-1ceb2eb17d35" path="/var/lib/kubelet/pods/cd0c4792-3d17-4ccf-893f-1ceb2eb17d35/volumes" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.339690 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bfbc56cc-98l48"] Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.362443 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d69c9d5c6-qr8nq"] Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.379901 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.384255 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.410068 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-secret-key\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.410178 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-tls-certs\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.410350 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv5t4\" (UniqueName: \"kubernetes.io/projected/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-kube-api-access-mv5t4\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.410378 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-combined-ca-bundle\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.410399 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-logs\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.410432 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-scripts\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.410648 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-config-data\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.427312 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d69c9d5c6-qr8nq"] Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.458770 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fcf7cc567-vhj2w"] Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.468946 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 
21:55:50.484970 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-567c96d68-4rmbm"] Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.486605 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.492537 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-567c96d68-4rmbm"] Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517082 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-tls-certs\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517133 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-scripts\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517166 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-combined-ca-bundle\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517196 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-logs\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517379 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr55f\" (UniqueName: \"kubernetes.io/projected/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-kube-api-access-zr55f\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517443 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-config-data\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517495 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv5t4\" (UniqueName: \"kubernetes.io/projected/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-kube-api-access-mv5t4\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517526 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-combined-ca-bundle\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " 
pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517557 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-logs\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517625 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-scripts\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517676 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-horizon-tls-certs\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517784 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-horizon-secret-key\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517815 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-config-data\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.517868 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-secret-key\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.616604 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-tls-certs\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.617159 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-scripts\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.617388 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-logs\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.619884 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-combined-ca-bundle\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.620046 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-config-data\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.620379 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-horizon-tls-certs\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.620474 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-horizon-secret-key\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.620592 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-scripts\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.620639 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-combined-ca-bundle\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.620710 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-logs\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.620788 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr55f\" (UniqueName: \"kubernetes.io/projected/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-kube-api-access-zr55f\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.620822 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-config-data\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.621457 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-scripts\") pod \"horizon-567c96d68-4rmbm\" (UID: 
\"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.621741 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-logs\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.622388 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-secret-key\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.623260 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-horizon-tls-certs\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.623405 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv5t4\" (UniqueName: \"kubernetes.io/projected/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-kube-api-access-mv5t4\") pod \"horizon-6d69c9d5c6-qr8nq\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.623682 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-config-data\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.638230 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-combined-ca-bundle\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.638772 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-horizon-secret-key\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.641718 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr55f\" (UniqueName: \"kubernetes.io/projected/f3a751ba-fb23-4cd3-a1f7-2c843e04ab47-kube-api-access-zr55f\") pod \"horizon-567c96d68-4rmbm\" (UID: \"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47\") " pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.800519 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:55:50 crc kubenswrapper[4767]: I1124 21:55:50.812941 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.106447 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.117846 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.141531 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.164326 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.255376 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.858862 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.873060 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.904488 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.912056 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.938351 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-fc7bm"] Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.939100 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" podUID="f97d9980-2ced-4225-b125-cfffc3f605c9" containerName="dnsmasq-dns" containerID="cri-o://fc889eea10c52d65d24c1fe8834229472e49a611c082efc1d6af7208f3b05088" gracePeriod=10 Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.973438 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:55:51 crc kubenswrapper[4767]: I1124 21:55:51.995860 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Nov 24 21:55:52 crc kubenswrapper[4767]: I1124 21:55:52.199381 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": read tcp 10.217.0.2:55740->10.217.0.149:9322: read: connection reset by peer" Nov 24 21:55:52 crc kubenswrapper[4767]: I1124 21:55:52.875488 4767 generic.go:334] "Generic (PLEG): container finished" podID="f97d9980-2ced-4225-b125-cfffc3f605c9" containerID="fc889eea10c52d65d24c1fe8834229472e49a611c082efc1d6af7208f3b05088" exitCode=0 Nov 24 21:55:52 crc kubenswrapper[4767]: I1124 21:55:52.875580 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" 
event={"ID":"f97d9980-2ced-4225-b125-cfffc3f605c9","Type":"ContainerDied","Data":"fc889eea10c52d65d24c1fe8834229472e49a611c082efc1d6af7208f3b05088"} Nov 24 21:55:52 crc kubenswrapper[4767]: I1124 21:55:52.878539 4767 generic.go:334] "Generic (PLEG): container finished" podID="9af43afe-d337-48a3-a1ec-568b83802765" containerID="cfdf82a508d9116de96afd602ec8e8eb0e4e52fec42991df41fb5de1ad453088" exitCode=0 Nov 24 21:55:52 crc kubenswrapper[4767]: I1124 21:55:52.880048 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9af43afe-d337-48a3-a1ec-568b83802765","Type":"ContainerDied","Data":"cfdf82a508d9116de96afd602ec8e8eb0e4e52fec42991df41fb5de1ad453088"} Nov 24 21:55:52 crc kubenswrapper[4767]: I1124 21:55:52.913245 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" podUID="f97d9980-2ced-4225-b125-cfffc3f605c9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.699941 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.797544 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-nb\") pod \"f97d9980-2ced-4225-b125-cfffc3f605c9\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.797827 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-swift-storage-0\") pod \"f97d9980-2ced-4225-b125-cfffc3f605c9\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.797983 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhqxx\" (UniqueName: \"kubernetes.io/projected/f97d9980-2ced-4225-b125-cfffc3f605c9-kube-api-access-rhqxx\") pod \"f97d9980-2ced-4225-b125-cfffc3f605c9\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.798022 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-svc\") pod \"f97d9980-2ced-4225-b125-cfffc3f605c9\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.798069 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-config\") pod \"f97d9980-2ced-4225-b125-cfffc3f605c9\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.798091 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-sb\") pod \"f97d9980-2ced-4225-b125-cfffc3f605c9\" (UID: \"f97d9980-2ced-4225-b125-cfffc3f605c9\") " Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.804083 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f97d9980-2ced-4225-b125-cfffc3f605c9-kube-api-access-rhqxx" (OuterVolumeSpecName: "kube-api-access-rhqxx") pod "f97d9980-2ced-4225-b125-cfffc3f605c9" (UID: "f97d9980-2ced-4225-b125-cfffc3f605c9"). InnerVolumeSpecName "kube-api-access-rhqxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.842066 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f97d9980-2ced-4225-b125-cfffc3f605c9" (UID: "f97d9980-2ced-4225-b125-cfffc3f605c9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.843393 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-config" (OuterVolumeSpecName: "config") pod "f97d9980-2ced-4225-b125-cfffc3f605c9" (UID: "f97d9980-2ced-4225-b125-cfffc3f605c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.847666 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f97d9980-2ced-4225-b125-cfffc3f605c9" (UID: "f97d9980-2ced-4225-b125-cfffc3f605c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.848802 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f97d9980-2ced-4225-b125-cfffc3f605c9" (UID: "f97d9980-2ced-4225-b125-cfffc3f605c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.858818 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f97d9980-2ced-4225-b125-cfffc3f605c9" (UID: "f97d9980-2ced-4225-b125-cfffc3f605c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.890714 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" event={"ID":"f97d9980-2ced-4225-b125-cfffc3f605c9","Type":"ContainerDied","Data":"a487a1c893786e0a8d4b998ca505be14ccf30b359dd8112fdd9db7313db10f13"} Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.890769 4767 scope.go:117] "RemoveContainer" containerID="fc889eea10c52d65d24c1fe8834229472e49a611c082efc1d6af7208f3b05088" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.890775 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-fc7bm" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.890879 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="f7073226-245a-41db-80c3-f30102363ae1" containerName="watcher-applier" containerID="cri-o://af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" gracePeriod=30 Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.891212 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="cd362fd6-aa93-46af-b11d-042876cf1554" containerName="watcher-decision-engine" containerID="cri-o://a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118" gracePeriod=30 Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.900307 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhqxx\" (UniqueName: \"kubernetes.io/projected/f97d9980-2ced-4225-b125-cfffc3f605c9-kube-api-access-rhqxx\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.900338 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.900348 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.900359 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.900373 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.900385 4767 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f97d9980-2ced-4225-b125-cfffc3f605c9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.928688 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-fc7bm"] Nov 24 21:55:53 crc kubenswrapper[4767]: I1124 21:55:53.938478 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-fc7bm"] Nov 24 21:55:54 crc kubenswrapper[4767]: I1124 21:55:54.322933 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97d9980-2ced-4225-b125-cfffc3f605c9" path="/var/lib/kubelet/pods/f97d9980-2ced-4225-b125-cfffc3f605c9/volumes" Nov 24 21:55:56 crc kubenswrapper[4767]: E1124 21:55:56.119534 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:55:56 crc kubenswrapper[4767]: E1124 21:55:56.121631 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register 
an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:55:56 crc kubenswrapper[4767]: E1124 21:55:56.122697 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:55:56 crc kubenswrapper[4767]: E1124 21:55:56.122812 4767 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="f7073226-245a-41db-80c3-f30102363ae1" containerName="watcher-applier" Nov 24 21:55:56 crc kubenswrapper[4767]: I1124 21:55:56.211287 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": dial tcp 10.217.0.149:9322: connect: connection refused" Nov 24 21:55:57 crc kubenswrapper[4767]: I1124 21:55:57.926594 4767 generic.go:334] "Generic (PLEG): container finished" podID="f7073226-245a-41db-80c3-f30102363ae1" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" exitCode=0 Nov 24 21:55:57 crc kubenswrapper[4767]: I1124 21:55:57.926690 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"f7073226-245a-41db-80c3-f30102363ae1","Type":"ContainerDied","Data":"af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094"} Nov 24 21:55:58 crc kubenswrapper[4767]: I1124 21:55:58.938036 4767 generic.go:334] "Generic (PLEG): container finished" podID="cd362fd6-aa93-46af-b11d-042876cf1554" containerID="a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118" exitCode=0 Nov 24 21:55:58 crc kubenswrapper[4767]: I1124 21:55:58.938063 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"cd362fd6-aa93-46af-b11d-042876cf1554","Type":"ContainerDied","Data":"a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118"} Nov 24 21:56:01 crc kubenswrapper[4767]: E1124 21:56:01.118466 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:56:01 crc kubenswrapper[4767]: E1124 21:56:01.119181 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:56:01 crc kubenswrapper[4767]: E1124 21:56:01.119508 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:56:01 crc kubenswrapper[4767]: E1124 21:56:01.119533 4767 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="f7073226-245a-41db-80c3-f30102363ae1" containerName="watcher-applier" Nov 24 21:56:01 crc kubenswrapper[4767]: I1124 21:56:01.211110 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": dial tcp 10.217.0.149:9322: connect: connection refused" Nov 24 21:56:02 crc kubenswrapper[4767]: E1124 21:56:02.221862 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 21:56:02 crc kubenswrapper[4767]: E1124 21:56:02.222017 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n97h678h95hf7h55bh656h589hfch5d9h5d4h545h564h554hfh567h668hb5h555h75h59dh86h78h66h9fh68h597h58bh54dh5ddhd7h59fh96q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zppmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-77896db6b9-8mlpx_openstack(f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 
21:56:02 crc kubenswrapper[4767]: E1124 21:56:02.224043 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-77896db6b9-8mlpx" podUID="f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3" Nov 24 21:56:02 crc kubenswrapper[4767]: E1124 21:56:02.226004 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 21:56:02 crc kubenswrapper[4767]: E1124 21:56:02.226160 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fch5fdh688h649hf4h54ch5b4h59ch57dh5b9h649h9fhd4h88h7h687h5b7h57ch688h544h579h5d5h8fh569h65h545h5ddh5f6h57h67h75h5bcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrz4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5bfbc56cc-98l48_openstack(c1f11c62-caea-4b02-9a66-6c385a3b93c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:56:02 crc kubenswrapper[4767]: E1124 21:56:02.228978 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5bfbc56cc-98l48" podUID="c1f11c62-caea-4b02-9a66-6c385a3b93c0" Nov 24 21:56:02 crc 
kubenswrapper[4767]: E1124 21:56:02.278140 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 21:56:02 crc kubenswrapper[4767]: E1124 21:56:02.278387 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65ch644h667h687h68h57ch5cdh5fdhfbh67ch9fh5fdhdch599h668h66h67h56h57h576hd8h5d4h5bch6h5c4h5d7hdch97h686hcchb9h54q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpqd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5fcf7cc567-vhj2w_openstack(583d56e4-c8bb-4f8e-9d6c-8623c078a1b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:56:02 crc kubenswrapper[4767]: E1124 21:56:02.287229 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5fcf7cc567-vhj2w" podUID="583d56e4-c8bb-4f8e-9d6c-8623c078a1b6" Nov 24 21:56:04 crc kubenswrapper[4767]: I1124 21:56:04.001433 4767 generic.go:334] "Generic (PLEG): container finished" podID="92996c14-829b-4668-b74f-42e672f1b9b3" containerID="e60977c789ead8b141e42c27319cf77ce4315398c54b033209d9239eb062d0d4" exitCode=0 Nov 24 21:56:04 crc kubenswrapper[4767]: I1124 21:56:04.001525 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lc8sg" event={"ID":"92996c14-829b-4668-b74f-42e672f1b9b3","Type":"ContainerDied","Data":"e60977c789ead8b141e42c27319cf77ce4315398c54b033209d9239eb062d0d4"} Nov 
24 21:56:06 crc kubenswrapper[4767]: E1124 21:56:06.118108 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:56:06 crc kubenswrapper[4767]: E1124 21:56:06.118981 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:56:06 crc kubenswrapper[4767]: E1124 21:56:06.119472 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:56:06 crc kubenswrapper[4767]: E1124 21:56:06.119534 4767 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="f7073226-245a-41db-80c3-f30102363ae1" containerName="watcher-applier" Nov 24 21:56:10 crc kubenswrapper[4767]: E1124 21:56:10.158782 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 24 21:56:10 crc kubenswrapper[4767]: E1124 21:56:10.159488 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n564h5ddh7fh66h598h558hb9h5bchc6h8fh5fbh54fh645h9ch587hfdh644h565h556hc9h64h659h5d6hd8h554hd7hc6h674h55dh5ffh74h5cfq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lkxc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d7a9ba0d-f67a-4887-82d8-3135cf56098a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:56:10 crc kubenswrapper[4767]: I1124 21:56:10.172046 4767 scope.go:117] "RemoveContainer" containerID="a8489feca93b2f16c904d5b239c8a9ff76dac7dabe88724959ce9843b095587a" Nov 24 21:56:11 crc kubenswrapper[4767]: E1124 21:56:11.106707 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118 is running failed: container process not found" containerID="a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Nov 24 21:56:11 crc kubenswrapper[4767]: E1124 21:56:11.107091 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118 is running failed: container process not found" containerID="a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Nov 24 21:56:11 crc kubenswrapper[4767]: E1124 
21:56:11.107468 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118 is running failed: container process not found" containerID="a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Nov 24 21:56:11 crc kubenswrapper[4767]: E1124 21:56:11.107535 4767 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="cd362fd6-aa93-46af-b11d-042876cf1554" containerName="watcher-decision-engine" Nov 24 21:56:11 crc kubenswrapper[4767]: E1124 21:56:11.118910 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:56:11 crc kubenswrapper[4767]: E1124 21:56:11.119386 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:56:11 crc kubenswrapper[4767]: E1124 21:56:11.119647 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Nov 24 21:56:11 crc kubenswrapper[4767]: E1124 21:56:11.119677 4767 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="f7073226-245a-41db-80c3-f30102363ae1" containerName="watcher-applier" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.211463 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:56:11 crc kubenswrapper[4767]: E1124 21:56:11.296866 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 21:56:11 crc kubenswrapper[4767]: E1124 21:56:11.298321 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29fsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-tzcqj_openstack(128eda36-f009-47c2-8939-73ec23da0d4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:56:11 crc kubenswrapper[4767]: E1124 21:56:11.299488 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-tzcqj" podUID="128eda36-f009-47c2-8939-73ec23da0d4c" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.535772 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.572048 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.577236 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.606883 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.624711 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.637751 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.651493 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657243 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-public-tls-certs\") pod \"67179f66-3806-4c95-b46a-858e6ad7575b\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657351 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fttlv\" (UniqueName: \"kubernetes.io/projected/f7073226-245a-41db-80c3-f30102363ae1-kube-api-access-fttlv\") pod \"f7073226-245a-41db-80c3-f30102363ae1\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657386 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-horizon-secret-key\") pod \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657410 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-config-data\") pod \"67179f66-3806-4c95-b46a-858e6ad7575b\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657436 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-combined-ca-bundle\") pod \"f7073226-245a-41db-80c3-f30102363ae1\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657459 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-scripts\") pod \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657510 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43afe-d337-48a3-a1ec-568b83802765-logs\") pod \"9af43afe-d337-48a3-a1ec-568b83802765\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657558 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-config-data\") pod \"f7073226-245a-41db-80c3-f30102363ae1\" (UID: 
\"f7073226-245a-41db-80c3-f30102363ae1\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657584 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1f11c62-caea-4b02-9a66-6c385a3b93c0-horizon-secret-key\") pod \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657609 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zppmh\" (UniqueName: \"kubernetes.io/projected/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-kube-api-access-zppmh\") pod \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657631 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-combined-ca-bundle\") pod \"67179f66-3806-4c95-b46a-858e6ad7575b\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657657 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-config-data\") pod \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657689 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-httpd-run\") pod \"67179f66-3806-4c95-b46a-858e6ad7575b\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657711 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-config-data\") pod \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657730 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7073226-245a-41db-80c3-f30102363ae1-logs\") pod \"f7073226-245a-41db-80c3-f30102363ae1\" (UID: \"f7073226-245a-41db-80c3-f30102363ae1\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657764 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-logs\") pod \"67179f66-3806-4c95-b46a-858e6ad7575b\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657784 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-horizon-secret-key\") pod \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657808 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-combined-ca-bundle\") pod \"92996c14-829b-4668-b74f-42e672f1b9b3\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") 
" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657833 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-custom-prometheus-ca\") pod \"9af43afe-d337-48a3-a1ec-568b83802765\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657854 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-config-data\") pod \"9af43afe-d337-48a3-a1ec-568b83802765\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657878 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1f11c62-caea-4b02-9a66-6c385a3b93c0-logs\") pod \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657900 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-config\") pod \"92996c14-829b-4668-b74f-42e672f1b9b3\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657920 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrz4j\" (UniqueName: \"kubernetes.io/projected/c1f11c62-caea-4b02-9a66-6c385a3b93c0-kube-api-access-jrz4j\") pod \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657944 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26tg7\" (UniqueName: \"kubernetes.io/projected/9af43afe-d337-48a3-a1ec-568b83802765-kube-api-access-26tg7\") pod \"9af43afe-d337-48a3-a1ec-568b83802765\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.657977 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-combined-ca-bundle\") pod \"9af43afe-d337-48a3-a1ec-568b83802765\" (UID: \"9af43afe-d337-48a3-a1ec-568b83802765\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.658001 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptg4m\" (UniqueName: \"kubernetes.io/projected/67179f66-3806-4c95-b46a-858e6ad7575b-kube-api-access-ptg4m\") pod \"67179f66-3806-4c95-b46a-858e6ad7575b\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.658086 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-scripts\") pod \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.660815 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"67179f66-3806-4c95-b46a-858e6ad7575b\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 
21:56:11.660866 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-config-data\") pod \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.660892 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpqd9\" (UniqueName: \"kubernetes.io/projected/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-kube-api-access-dpqd9\") pod \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.660956 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-logs\") pod \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\" (UID: \"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.660987 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-scripts\") pod \"67179f66-3806-4c95-b46a-858e6ad7575b\" (UID: \"67179f66-3806-4c95-b46a-858e6ad7575b\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.661049 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-scripts\") pod \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\" (UID: \"c1f11c62-caea-4b02-9a66-6c385a3b93c0\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.661088 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xmpz\" (UniqueName: \"kubernetes.io/projected/92996c14-829b-4668-b74f-42e672f1b9b3-kube-api-access-9xmpz\") pod \"92996c14-829b-4668-b74f-42e672f1b9b3\" (UID: \"92996c14-829b-4668-b74f-42e672f1b9b3\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.661114 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-logs\") pod \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\" (UID: \"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.663061 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-logs" (OuterVolumeSpecName: "logs") pod "f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3" (UID: "f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.666517 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-scripts" (OuterVolumeSpecName: "scripts") pod "f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3" (UID: "f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.666948 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-config-data" (OuterVolumeSpecName: "config-data") pod "c1f11c62-caea-4b02-9a66-6c385a3b93c0" (UID: "c1f11c62-caea-4b02-9a66-6c385a3b93c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.667879 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af43afe-d337-48a3-a1ec-568b83802765-logs" (OuterVolumeSpecName: "logs") pod "9af43afe-d337-48a3-a1ec-568b83802765" (UID: "9af43afe-d337-48a3-a1ec-568b83802765"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.676154 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "583d56e4-c8bb-4f8e-9d6c-8623c078a1b6" (UID: "583d56e4-c8bb-4f8e-9d6c-8623c078a1b6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.676443 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "67179f66-3806-4c95-b46a-858e6ad7575b" (UID: "67179f66-3806-4c95-b46a-858e6ad7575b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.677341 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-config-data" (OuterVolumeSpecName: "config-data") pod "583d56e4-c8bb-4f8e-9d6c-8623c078a1b6" (UID: "583d56e4-c8bb-4f8e-9d6c-8623c078a1b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.678185 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.678501 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-scripts" (OuterVolumeSpecName: "scripts") pod "c1f11c62-caea-4b02-9a66-6c385a3b93c0" (UID: "c1f11c62-caea-4b02-9a66-6c385a3b93c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.678954 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-logs" (OuterVolumeSpecName: "logs") pod "583d56e4-c8bb-4f8e-9d6c-8623c078a1b6" (UID: "583d56e4-c8bb-4f8e-9d6c-8623c078a1b6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.679830 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-scripts" (OuterVolumeSpecName: "scripts") pod "583d56e4-c8bb-4f8e-9d6c-8623c078a1b6" (UID: "583d56e4-c8bb-4f8e-9d6c-8623c078a1b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.680350 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-config-data" (OuterVolumeSpecName: "config-data") pod "f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3" (UID: "f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.681098 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-logs" (OuterVolumeSpecName: "logs") pod "67179f66-3806-4c95-b46a-858e6ad7575b" (UID: "67179f66-3806-4c95-b46a-858e6ad7575b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.681812 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1f11c62-caea-4b02-9a66-6c385a3b93c0-logs" (OuterVolumeSpecName: "logs") pod "c1f11c62-caea-4b02-9a66-6c385a3b93c0" (UID: "c1f11c62-caea-4b02-9a66-6c385a3b93c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.692403 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7073226-245a-41db-80c3-f30102363ae1-logs" (OuterVolumeSpecName: "logs") pod "f7073226-245a-41db-80c3-f30102363ae1" (UID: "f7073226-245a-41db-80c3-f30102363ae1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.693022 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-kube-api-access-zppmh" (OuterVolumeSpecName: "kube-api-access-zppmh") pod "f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3" (UID: "f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3"). InnerVolumeSpecName "kube-api-access-zppmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.695665 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f11c62-caea-4b02-9a66-6c385a3b93c0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c1f11c62-caea-4b02-9a66-6c385a3b93c0" (UID: "c1f11c62-caea-4b02-9a66-6c385a3b93c0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.697021 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67179f66-3806-4c95-b46a-858e6ad7575b-kube-api-access-ptg4m" (OuterVolumeSpecName: "kube-api-access-ptg4m") pod "67179f66-3806-4c95-b46a-858e6ad7575b" (UID: "67179f66-3806-4c95-b46a-858e6ad7575b"). InnerVolumeSpecName "kube-api-access-ptg4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.698373 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-kube-api-access-dpqd9" (OuterVolumeSpecName: "kube-api-access-dpqd9") pod "583d56e4-c8bb-4f8e-9d6c-8623c078a1b6" (UID: "583d56e4-c8bb-4f8e-9d6c-8623c078a1b6"). InnerVolumeSpecName "kube-api-access-dpqd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.699248 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "67179f66-3806-4c95-b46a-858e6ad7575b" (UID: "67179f66-3806-4c95-b46a-858e6ad7575b"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.704753 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2k8wb"] Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.730046 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3" (UID: "f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.730119 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92996c14-829b-4668-b74f-42e672f1b9b3-kube-api-access-9xmpz" (OuterVolumeSpecName: "kube-api-access-9xmpz") pod "92996c14-829b-4668-b74f-42e672f1b9b3" (UID: "92996c14-829b-4668-b74f-42e672f1b9b3"). InnerVolumeSpecName "kube-api-access-9xmpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.730737 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-scripts" (OuterVolumeSpecName: "scripts") pod "67179f66-3806-4c95-b46a-858e6ad7575b" (UID: "67179f66-3806-4c95-b46a-858e6ad7575b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.734532 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7073226-245a-41db-80c3-f30102363ae1-kube-api-access-fttlv" (OuterVolumeSpecName: "kube-api-access-fttlv") pod "f7073226-245a-41db-80c3-f30102363ae1" (UID: "f7073226-245a-41db-80c3-f30102363ae1"). InnerVolumeSpecName "kube-api-access-fttlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.738827 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af43afe-d337-48a3-a1ec-568b83802765-kube-api-access-26tg7" (OuterVolumeSpecName: "kube-api-access-26tg7") pod "9af43afe-d337-48a3-a1ec-568b83802765" (UID: "9af43afe-d337-48a3-a1ec-568b83802765"). InnerVolumeSpecName "kube-api-access-26tg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.738849 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f11c62-caea-4b02-9a66-6c385a3b93c0-kube-api-access-jrz4j" (OuterVolumeSpecName: "kube-api-access-jrz4j") pod "c1f11c62-caea-4b02-9a66-6c385a3b93c0" (UID: "c1f11c62-caea-4b02-9a66-6c385a3b93c0"). InnerVolumeSpecName "kube-api-access-jrz4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.765331 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-custom-prometheus-ca\") pod \"cd362fd6-aa93-46af-b11d-042876cf1554\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.765400 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwfj9\" (UniqueName: \"kubernetes.io/projected/cd362fd6-aa93-46af-b11d-042876cf1554-kube-api-access-vwfj9\") pod \"cd362fd6-aa93-46af-b11d-042876cf1554\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.765490 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd362fd6-aa93-46af-b11d-042876cf1554-logs\") pod \"cd362fd6-aa93-46af-b11d-042876cf1554\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.765547 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-config-data\") pod \"cd362fd6-aa93-46af-b11d-042876cf1554\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.765660 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-combined-ca-bundle\") pod \"cd362fd6-aa93-46af-b11d-042876cf1554\" (UID: \"cd362fd6-aa93-46af-b11d-042876cf1554\") " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.765981 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.765994 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fttlv\" (UniqueName: \"kubernetes.io/projected/f7073226-245a-41db-80c3-f30102363ae1-kube-api-access-fttlv\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766007 4767 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766017 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766028 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9af43afe-d337-48a3-a1ec-568b83802765-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766040 4767 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1f11c62-caea-4b02-9a66-6c385a3b93c0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766051 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zppmh\" (UniqueName: \"kubernetes.io/projected/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-kube-api-access-zppmh\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766060 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766068 4767 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766077 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766086 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7073226-245a-41db-80c3-f30102363ae1-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766094 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67179f66-3806-4c95-b46a-858e6ad7575b-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766102 4767 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766110 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1f11c62-caea-4b02-9a66-6c385a3b93c0-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766118 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrz4j\" (UniqueName: \"kubernetes.io/projected/c1f11c62-caea-4b02-9a66-6c385a3b93c0-kube-api-access-jrz4j\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766126 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26tg7\" (UniqueName: \"kubernetes.io/projected/9af43afe-d337-48a3-a1ec-568b83802765-kube-api-access-26tg7\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766135 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptg4m\" (UniqueName: \"kubernetes.io/projected/67179f66-3806-4c95-b46a-858e6ad7575b-kube-api-access-ptg4m\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766143 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-scripts\") on node \"crc\" DevicePath 
\"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766160 4767 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766173 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766185 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpqd9\" (UniqueName: \"kubernetes.io/projected/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-kube-api-access-dpqd9\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766196 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766205 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766213 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1f11c62-caea-4b02-9a66-6c385a3b93c0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.766222 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xmpz\" (UniqueName: \"kubernetes.io/projected/92996c14-829b-4668-b74f-42e672f1b9b3-kube-api-access-9xmpz\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.767644 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd362fd6-aa93-46af-b11d-042876cf1554-logs" (OuterVolumeSpecName: "logs") pod "cd362fd6-aa93-46af-b11d-042876cf1554" (UID: "cd362fd6-aa93-46af-b11d-042876cf1554"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.768555 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d69c9d5c6-qr8nq"] Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.791510 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd362fd6-aa93-46af-b11d-042876cf1554-kube-api-access-vwfj9" (OuterVolumeSpecName: "kube-api-access-vwfj9") pod "cd362fd6-aa93-46af-b11d-042876cf1554" (UID: "cd362fd6-aa93-46af-b11d-042876cf1554"). InnerVolumeSpecName "kube-api-access-vwfj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.826886 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-config" (OuterVolumeSpecName: "config") pod "92996c14-829b-4668-b74f-42e672f1b9b3" (UID: "92996c14-829b-4668-b74f-42e672f1b9b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.834606 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92996c14-829b-4668-b74f-42e672f1b9b3" (UID: "92996c14-829b-4668-b74f-42e672f1b9b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.842801 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.849669 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9af43afe-d337-48a3-a1ec-568b83802765" (UID: "9af43afe-d337-48a3-a1ec-568b83802765"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.850098 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67179f66-3806-4c95-b46a-858e6ad7575b" (UID: "67179f66-3806-4c95-b46a-858e6ad7575b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.851296 4767 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.860885 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-config-data" (OuterVolumeSpecName: "config-data") pod "67179f66-3806-4c95-b46a-858e6ad7575b" (UID: "67179f66-3806-4c95-b46a-858e6ad7575b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.866116 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-567c96d68-4rmbm"] Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.868365 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.868403 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.868416 4767 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.868430 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/92996c14-829b-4668-b74f-42e672f1b9b3-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.868442 4767 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.868453 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwfj9\" (UniqueName: \"kubernetes.io/projected/cd362fd6-aa93-46af-b11d-042876cf1554-kube-api-access-vwfj9\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.868464 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd362fd6-aa93-46af-b11d-042876cf1554-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.868474 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.873634 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7073226-245a-41db-80c3-f30102363ae1" (UID: "f7073226-245a-41db-80c3-f30102363ae1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.883747 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd362fd6-aa93-46af-b11d-042876cf1554" (UID: "cd362fd6-aa93-46af-b11d-042876cf1554"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.885160 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cd362fd6-aa93-46af-b11d-042876cf1554" (UID: "cd362fd6-aa93-46af-b11d-042876cf1554"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.888998 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-config-data" (OuterVolumeSpecName: "config-data") pod "cd362fd6-aa93-46af-b11d-042876cf1554" (UID: "cd362fd6-aa93-46af-b11d-042876cf1554"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.891010 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9af43afe-d337-48a3-a1ec-568b83802765" (UID: "9af43afe-d337-48a3-a1ec-568b83802765"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.909830 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "67179f66-3806-4c95-b46a-858e6ad7575b" (UID: "67179f66-3806-4c95-b46a-858e6ad7575b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.910855 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-config-data" (OuterVolumeSpecName: "config-data") pod "9af43afe-d337-48a3-a1ec-568b83802765" (UID: "9af43afe-d337-48a3-a1ec-568b83802765"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.911039 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-config-data" (OuterVolumeSpecName: "config-data") pod "f7073226-245a-41db-80c3-f30102363ae1" (UID: "f7073226-245a-41db-80c3-f30102363ae1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.970553 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.970608 4767 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.970626 4767 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67179f66-3806-4c95-b46a-858e6ad7575b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.970638 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.970670 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7073226-245a-41db-80c3-f30102363ae1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.970681 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.970691 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43afe-d337-48a3-a1ec-568b83802765-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:11 crc kubenswrapper[4767]: I1124 21:56:11.970704 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd362fd6-aa93-46af-b11d-042876cf1554-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:12 crc kubenswrapper[4767]: W1124 21:56:12.065488 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54aafebf_445c_4632_81c3_1f35b84a4ef7.slice/crio-0f37ac4f1a685b4636ec421e22c8790819a512c301be03ee3feb5f89c9ed784d WatchSource:0}: Error finding container 0f37ac4f1a685b4636ec421e22c8790819a512c301be03ee3feb5f89c9ed784d: Status 404 returned error can't find the container with id 0f37ac4f1a685b4636ec421e22c8790819a512c301be03ee3feb5f89c9ed784d Nov 24 21:56:12 crc kubenswrapper[4767]: W1124 21:56:12.076094 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3a751ba_fb23_4cd3_a1f7_2c843e04ab47.slice/crio-d600ec54742a29aad6a508642db597ceb350549e25d633c8ec3eca050d20e932 WatchSource:0}: Error finding container d600ec54742a29aad6a508642db597ceb350549e25d633c8ec3eca050d20e932: Status 404 returned error can't find the container with id d600ec54742a29aad6a508642db597ceb350549e25d633c8ec3eca050d20e932 Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.084230 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bfbc56cc-98l48" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.084228 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bfbc56cc-98l48" event={"ID":"c1f11c62-caea-4b02-9a66-6c385a3b93c0","Type":"ContainerDied","Data":"eb1ee7ee3d875120c9a3b3584a4986edd443bc1bbeb99e87496d09b66088956f"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.086612 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r9fp5" event={"ID":"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703","Type":"ContainerStarted","Data":"88543f88cdf848cca677fbf0f060eaf50179873c8d4a13f37c36e487327e2ea8"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.089232 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb2678c-0ca9-48c9-952d-a0933f8dc512","Type":"ContainerStarted","Data":"a6b51f864e27e5f3001385863f9a37d5d1d1acc2078c09374908a0d71726f1fb"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.090420 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lc8sg" event={"ID":"92996c14-829b-4668-b74f-42e672f1b9b3","Type":"ContainerDied","Data":"98faa3c52197d9a96bfaa9d89a613aabd4a908393eed7e93d24ffc558ca7986a"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.090451 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98faa3c52197d9a96bfaa9d89a613aabd4a908393eed7e93d24ffc558ca7986a" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.090448 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lc8sg" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.092731 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77896db6b9-8mlpx" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.092739 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77896db6b9-8mlpx" event={"ID":"f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3","Type":"ContainerDied","Data":"c6ae6d294f2cec44e1c7bdc93f166d87036f86502112f83b5b3a0b1b9cd1f566"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.095090 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.095133 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67179f66-3806-4c95-b46a-858e6ad7575b","Type":"ContainerDied","Data":"b1c2a234a2ca216a8d7bb9b21957cce51c12444cdef75e9f922aa0681d438efa"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.095179 4767 scope.go:117] "RemoveContainer" containerID="46d7f4226950d813eca43d440e5e2c7b85f580e40e191d317d23551926d62ccf" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.103388 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hd5nf" event={"ID":"83eba727-cd44-4013-8ce3-5672f4f7f595","Type":"ContainerStarted","Data":"0c03e0e66a3599ba2b540ceb043b24be074b360e6c0b32d2722f8a0986479037"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.105587 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-r9fp5" podStartSLOduration=3.765080315 podStartE2EDuration="31.105571326s" podCreationTimestamp="2025-11-24 21:55:41 +0000 UTC" firstStartedPulling="2025-11-24 21:55:43.829049818 +0000 UTC m=+1026.746033190" lastFinishedPulling="2025-11-24 21:56:11.169540829 +0000 UTC m=+1054.086524201" observedRunningTime="2025-11-24 21:56:12.101646064 +0000 UTC m=+1055.018629456" watchObservedRunningTime="2025-11-24 21:56:12.105571326 +0000 UTC m=+1055.022554708" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.108302 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"f7073226-245a-41db-80c3-f30102363ae1","Type":"ContainerDied","Data":"172e5cdacce2cc16e3efee62c81c2839605064f6a6e16dce4dcb058f4ea995e4"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.108414 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.116487 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fcf7cc567-vhj2w" event={"ID":"583d56e4-c8bb-4f8e-9d6c-8623c078a1b6","Type":"ContainerDied","Data":"23e86a55b3deca8b3396235ccb86e7e67d10f5e4cfcdff236e377059e8eea318"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.116513 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fcf7cc567-vhj2w" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.117974 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2k8wb" event={"ID":"54aafebf-445c-4632-81c3-1f35b84a4ef7","Type":"ContainerStarted","Data":"0f37ac4f1a685b4636ec421e22c8790819a512c301be03ee3feb5f89c9ed784d"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.124862 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hd5nf" podStartSLOduration=5.253042481 podStartE2EDuration="31.124838493s" podCreationTimestamp="2025-11-24 21:55:41 +0000 UTC" firstStartedPulling="2025-11-24 21:55:43.841537743 +0000 UTC m=+1026.758521115" lastFinishedPulling="2025-11-24 21:56:09.713333745 +0000 UTC m=+1052.630317127" observedRunningTime="2025-11-24 21:56:12.122483936 +0000 UTC m=+1055.039467308" watchObservedRunningTime="2025-11-24 21:56:12.124838493 +0000 UTC m=+1055.041821865" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.125786 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.126219 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9af43afe-d337-48a3-a1ec-568b83802765","Type":"ContainerDied","Data":"d484e8619bb36dd3aca1e056c18184a6e0ceb6f334c110f87715c8e610488876"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.130052 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d69c9d5c6-qr8nq" event={"ID":"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1","Type":"ContainerStarted","Data":"e1f8ef7cdd40d10ca6d1d25295054d5b80f7439ecdeee23fb84d80be97e10390"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.131934 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.132059 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"cd362fd6-aa93-46af-b11d-042876cf1554","Type":"ContainerDied","Data":"021df0605bbf28ae221b96d08a3a18606fb47e0b54f9a344eda2c20fe416b33b"} Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.141153 4767 scope.go:117] "RemoveContainer" containerID="261c1847bbee7e49c289aeb51d9411c250fdd6939fdea3cd1a37f8988a8d7575" Nov 24 21:56:12 crc kubenswrapper[4767]: E1124 21:56:12.141466 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-tzcqj" podUID="128eda36-f009-47c2-8939-73ec23da0d4c" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.177027 4767 scope.go:117] "RemoveContainer" containerID="af09a2181d83bed7ba74641f3ada80130269beaaa6a22acc0a480ee443bb4094" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.361717 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.362198 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.371104 4767 scope.go:117] "RemoveContainer" containerID="cfdf82a508d9116de96afd602ec8e8eb0e4e52fec42991df41fb5de1ad453088" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.394437 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: E1124 21:56:12.394942 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97d9980-2ced-4225-b125-cfffc3f605c9" containerName="init" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.394965 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97d9980-2ced-4225-b125-cfffc3f605c9" containerName="init" Nov 24 21:56:12 crc kubenswrapper[4767]: E1124 21:56:12.394986 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api-log" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.394995 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api-log" Nov 24 21:56:12 crc kubenswrapper[4767]: E1124 21:56:12.395007 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7073226-245a-41db-80c3-f30102363ae1" containerName="watcher-applier" 
Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395014 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7073226-245a-41db-80c3-f30102363ae1" containerName="watcher-applier" Nov 24 21:56:12 crc kubenswrapper[4767]: E1124 21:56:12.395026 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92996c14-829b-4668-b74f-42e672f1b9b3" containerName="neutron-db-sync" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395103 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="92996c14-829b-4668-b74f-42e672f1b9b3" containerName="neutron-db-sync" Nov 24 21:56:12 crc kubenswrapper[4767]: E1124 21:56:12.395128 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd362fd6-aa93-46af-b11d-042876cf1554" containerName="watcher-decision-engine" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395136 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd362fd6-aa93-46af-b11d-042876cf1554" containerName="watcher-decision-engine" Nov 24 21:56:12 crc kubenswrapper[4767]: E1124 21:56:12.395150 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67179f66-3806-4c95-b46a-858e6ad7575b" containerName="glance-httpd" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395182 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="67179f66-3806-4c95-b46a-858e6ad7575b" containerName="glance-httpd" Nov 24 21:56:12 crc kubenswrapper[4767]: E1124 21:56:12.395198 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67179f66-3806-4c95-b46a-858e6ad7575b" containerName="glance-log" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395206 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="67179f66-3806-4c95-b46a-858e6ad7575b" containerName="glance-log" Nov 24 21:56:12 crc kubenswrapper[4767]: E1124 21:56:12.395221 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395229 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api" Nov 24 21:56:12 crc kubenswrapper[4767]: E1124 21:56:12.395249 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97d9980-2ced-4225-b125-cfffc3f605c9" containerName="dnsmasq-dns" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395258 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97d9980-2ced-4225-b125-cfffc3f605c9" containerName="dnsmasq-dns" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395472 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd362fd6-aa93-46af-b11d-042876cf1554" containerName="watcher-decision-engine" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395493 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="67179f66-3806-4c95-b46a-858e6ad7575b" containerName="glance-httpd" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395510 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7073226-245a-41db-80c3-f30102363ae1" containerName="watcher-applier" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395526 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395537 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="67179f66-3806-4c95-b46a-858e6ad7575b" 
containerName="glance-log" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395546 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api-log" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395560 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="92996c14-829b-4668-b74f-42e672f1b9b3" containerName="neutron-db-sync" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.395581 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97d9980-2ced-4225-b125-cfffc3f605c9" containerName="dnsmasq-dns" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.396833 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.400978 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.401024 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.403741 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.491025 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77896db6b9-8mlpx"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.493190 4767 scope.go:117] "RemoveContainer" containerID="c45b38293ca7e57ff55daeff944b8407c979da4ebe2ded94711e7ad85868ea38" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.505063 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77896db6b9-8mlpx"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.531330 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bfbc56cc-98l48"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.536705 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5bfbc56cc-98l48"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.588641 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.588689 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.588731 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.588764 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-scripts\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.588779 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-logs\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.589434 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-config-data\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.589458 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6cx\" (UniqueName: \"kubernetes.io/projected/8f680b41-c2c3-4795-98df-05e64ad8ed95-kube-api-access-5w6cx\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.589475 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.614479 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.632930 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.664938 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.681540 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.691034 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.691120 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.691174 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-scripts\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " 
pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.691191 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-logs\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.691276 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-config-data\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.691294 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6cx\" (UniqueName: \"kubernetes.io/projected/8f680b41-c2c3-4795-98df-05e64ad8ed95-kube-api-access-5w6cx\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.691310 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.691364 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.694438 4767 scope.go:117] "RemoveContainer" containerID="a764a74bb13ed23a0f6634101933e913ee7272692c2614118b5b97a88ad77118" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.695235 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-logs\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.704158 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.705772 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.710152 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.710533 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.711381 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.715948 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-config-data\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.718007 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-mrbvr" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.718409 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.725334 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-scripts\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.729057 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.736927 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6cx\" (UniqueName: \"kubernetes.io/projected/8f680b41-c2c3-4795-98df-05e64ad8ed95-kube-api-access-5w6cx\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.783778 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.793367 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f5a67d-feb4-402c-ac35-fc17aca926c5-logs\") pod 
\"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.793428 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.793462 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.793533 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbwfq\" (UniqueName: \"kubernetes.io/projected/31f5a67d-feb4-402c-ac35-fc17aca926c5-kube-api-access-qbwfq\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.793554 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.796812 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.798257 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.810245 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.814055 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.880577 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.894725 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbwfq\" (UniqueName: \"kubernetes.io/projected/31f5a67d-feb4-402c-ac35-fc17aca926c5-kube-api-access-qbwfq\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.894766 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.894834 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-config-data\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.894859 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-logs\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.894881 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.894899 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t94wj\" (UniqueName: \"kubernetes.io/projected/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-kube-api-access-t94wj\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.894929 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f5a67d-feb4-402c-ac35-fc17aca926c5-logs\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.894948 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.894968 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.894986 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.897580 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fcf7cc567-vhj2w"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.899702 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f5a67d-feb4-402c-ac35-fc17aca926c5-logs\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.908523 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.911382 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5fcf7cc567-vhj2w"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.912518 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.917033 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbwfq\" (UniqueName: \"kubernetes.io/projected/31f5a67d-feb4-402c-ac35-fc17aca926c5-kube-api-access-qbwfq\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.921398 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.922452 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.923962 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.943510 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.960757 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.962120 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Nov 24 21:56:12 crc kubenswrapper[4767]: I1124 21:56:12.966589 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.001960 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.002235 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-config-data\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.002346 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-logs\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.002416 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.002478 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t94wj\" (UniqueName: \"kubernetes.io/projected/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-kube-api-access-t94wj\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.006048 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-logs\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.007821 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.009156 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Nov 24 21:56:13 crc kubenswrapper[4767]: 
I1124 21:56:13.009945 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.024286 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-config-data\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.045361 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t94wj\" (UniqueName: \"kubernetes.io/projected/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-kube-api-access-t94wj\") pod \"watcher-api-0\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " pod="openstack/watcher-api-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.063341 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vkcs7"] Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.064923 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.074570 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vkcs7"] Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.103652 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63be4b34-e65f-4045-8223-6f19324c761b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.103944 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63be4b34-e65f-4045-8223-6f19324c761b-config-data\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.103994 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqpkj\" (UniqueName: \"kubernetes.io/projected/63be4b34-e65f-4045-8223-6f19324c761b-kube-api-access-zqpkj\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.104027 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63be4b34-e65f-4045-8223-6f19324c761b-logs\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.109346 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b77df9bd4-5cckf"] Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.110773 4767 util.go:30] "No sandbox for pod can be found. 
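
Each kubenswrapper message carries a klog header, e.g. I1124 21:56:13.063341 4767 kubelet.go:2421]: severity (I/W/E/F), month and day, wall time with microseconds, the emitting PID, and the source file:line. klog omits the year, so it has to come from the enclosing journald timestamp. A minimal parsing sketch in Go (the field layout follows the lines above; the year is supplied by the caller):

    package main

    import (
        "fmt"
        "regexp"
        "time"
    )

    // klog header: Lmmdd hh:mm:ss.uuuuuu threadid file:line]
    var header = regexp.MustCompile(`([IWEF])(\d{4} \d{2}:\d{2}:\d{2}\.\d{6}) +(\d+) ([\w.]+:\d+)\]`)

    func parse(line string, year int) {
        m := header.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no klog header found")
            return
        }
        // "0102 15:04:05.000000" is Go's reference layout for klog's
        // year-less month-day timestamp; parsing yields year 0.
        ts, err := time.Parse("0102 15:04:05.000000", m[2])
        if err != nil {
            panic(err)
        }
        ts = ts.AddDate(year, 0, 0) // add the year klog left out
        fmt.Printf("severity=%s time=%s pid=%s at=%s\n",
            m[1], ts.Format(time.RFC3339Nano), m[3], m[4])
    }

    func main() {
        parse(`I1124 21:56:13.063341 4767 kubelet.go:2421] "SyncLoop ADD" source="api"`, 2025)
    }
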
Need to start a new one" pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.113771 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.114132 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fz5t8" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.120776 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.120956 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.134513 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b77df9bd4-5cckf"] Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.201329 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.202808 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-567c96d68-4rmbm" event={"ID":"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47","Type":"ContainerStarted","Data":"d600ec54742a29aad6a508642db597ceb350549e25d633c8ec3eca050d20e932"} Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205038 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8pzp\" (UniqueName: \"kubernetes.io/projected/d949a6f4-9d83-42c5-b4df-e79178848c5f-kube-api-access-k8pzp\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205082 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63be4b34-e65f-4045-8223-6f19324c761b-logs\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205113 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-ovndb-tls-certs\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205143 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205160 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-combined-ca-bundle\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205179 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-httpd-config\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205206 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-config\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205223 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205243 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63be4b34-e65f-4045-8223-6f19324c761b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205261 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-config\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205316 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205342 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63be4b34-e65f-4045-8223-6f19324c761b-config-data\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205370 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzr5f\" (UniqueName: \"kubernetes.io/projected/72d913d0-e2e2-4c49-9775-e16826ebcb2e-kube-api-access-rzr5f\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205397 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.205418 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqpkj\" (UniqueName: 
\"kubernetes.io/projected/63be4b34-e65f-4045-8223-6f19324c761b-kube-api-access-zqpkj\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.206110 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63be4b34-e65f-4045-8223-6f19324c761b-logs\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.210959 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63be4b34-e65f-4045-8223-6f19324c761b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.224214 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.228824 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63be4b34-e65f-4045-8223-6f19324c761b-config-data\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.230786 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqpkj\" (UniqueName: \"kubernetes.io/projected/63be4b34-e65f-4045-8223-6f19324c761b-kube-api-access-zqpkj\") pod \"watcher-applier-0\" (UID: \"63be4b34-e65f-4045-8223-6f19324c761b\") " pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.252616 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a9ba0d-f67a-4887-82d8-3135cf56098a","Type":"ContainerStarted","Data":"fb6fbe7605b29465a47f88ba3630d4e3a7ea3d9849c6e90239ccaf7407709025"} Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.272257 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2k8wb" event={"ID":"54aafebf-445c-4632-81c3-1f35b84a4ef7","Type":"ContainerStarted","Data":"f2b8544d895d08acb115bbcb716dc8d72b95ad8f72cf4551f1c82a0cd888ac92"} Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.309896 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-httpd-config\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.309947 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-config\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.309970 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 
21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.309996 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-config\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.310029 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.310070 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzr5f\" (UniqueName: \"kubernetes.io/projected/72d913d0-e2e2-4c49-9775-e16826ebcb2e-kube-api-access-rzr5f\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.310098 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.310124 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8pzp\" (UniqueName: \"kubernetes.io/projected/d949a6f4-9d83-42c5-b4df-e79178848c5f-kube-api-access-k8pzp\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.310163 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-ovndb-tls-certs\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.310190 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.310209 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-combined-ca-bundle\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.312064 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.313865 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.314064 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.314328 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.314645 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.315168 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-config\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.327256 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-combined-ca-bundle\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.328108 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-httpd-config\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.328250 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-ovndb-tls-certs\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.337485 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2k8wb" podStartSLOduration=25.323355551 podStartE2EDuration="25.323355551s" podCreationTimestamp="2025-11-24 21:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:13.308681975 +0000 UTC m=+1056.225665347" watchObservedRunningTime="2025-11-24 21:56:13.323355551 +0000 UTC m=+1056.240338923" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.342228 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzr5f\" (UniqueName: \"kubernetes.io/projected/72d913d0-e2e2-4c49-9775-e16826ebcb2e-kube-api-access-rzr5f\") pod \"neutron-b77df9bd4-5cckf\" 
(UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.369928 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8pzp\" (UniqueName: \"kubernetes.io/projected/d949a6f4-9d83-42c5-b4df-e79178848c5f-kube-api-access-k8pzp\") pod \"dnsmasq-dns-55f844cf75-vkcs7\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.369959 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-config\") pod \"neutron-b77df9bd4-5cckf\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.395468 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.436746 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.843013 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:56:13 crc kubenswrapper[4767]: W1124 21:56:13.845039 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f680b41_c2c3_4795_98df_05e64ad8ed95.slice/crio-f17edf1843de8dfdf689ce0190c45429621af795fb6c53ffcceae8b196dee589 WatchSource:0}: Error finding container f17edf1843de8dfdf689ce0190c45429621af795fb6c53ffcceae8b196dee589: Status 404 returned error can't find the container with id f17edf1843de8dfdf689ce0190c45429621af795fb6c53ffcceae8b196dee589 Nov 24 21:56:13 crc kubenswrapper[4767]: I1124 21:56:13.989801 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.008142 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:56:14 crc kubenswrapper[4767]: W1124 21:56:14.009880 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b1b27c4_98d7_4f42_8c86_3ad108b0bcfb.slice/crio-4a2baf73deceda8af17b2640d66cc6019a750195a2adcc4f1f3be06631c44085 WatchSource:0}: Error finding container 4a2baf73deceda8af17b2640d66cc6019a750195a2adcc4f1f3be06631c44085: Status 404 returned error can't find the container with id 4a2baf73deceda8af17b2640d66cc6019a750195a2adcc4f1f3be06631c44085 Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.168070 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.281619 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b77df9bd4-5cckf"] Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.299926 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vkcs7"] Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.352586 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="583d56e4-c8bb-4f8e-9d6c-8623c078a1b6" path="/var/lib/kubelet/pods/583d56e4-c8bb-4f8e-9d6c-8623c078a1b6/volumes" Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.354057 4767 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67179f66-3806-4c95-b46a-858e6ad7575b" path="/var/lib/kubelet/pods/67179f66-3806-4c95-b46a-858e6ad7575b/volumes" Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.354857 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af43afe-d337-48a3-a1ec-568b83802765" path="/var/lib/kubelet/pods/9af43afe-d337-48a3-a1ec-568b83802765/volumes" Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.356338 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f11c62-caea-4b02-9a66-6c385a3b93c0" path="/var/lib/kubelet/pods/c1f11c62-caea-4b02-9a66-6c385a3b93c0/volumes" Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.356797 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd362fd6-aa93-46af-b11d-042876cf1554" path="/var/lib/kubelet/pods/cd362fd6-aa93-46af-b11d-042876cf1554/volumes" Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.357432 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7073226-245a-41db-80c3-f30102363ae1" path="/var/lib/kubelet/pods/f7073226-245a-41db-80c3-f30102363ae1/volumes" Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.358554 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3" path="/var/lib/kubelet/pods/f9ccb73f-2a88-4ab1-a1a6-f84dd30a19c3/volumes" Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.358999 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-567c96d68-4rmbm" event={"ID":"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47","Type":"ContainerStarted","Data":"cf910e7ca13754ac6a7829027261303f6f67643ed4644435a1b97f486e2d0669"} Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.359034 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"31f5a67d-feb4-402c-ac35-fc17aca926c5","Type":"ContainerStarted","Data":"aa90c75b4150a78587a2d5f8e544b9849198f6ef3bbd1956035bd561f423a398"} Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.359049 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb","Type":"ContainerStarted","Data":"4a2baf73deceda8af17b2640d66cc6019a750195a2adcc4f1f3be06631c44085"} Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.359060 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb2678c-0ca9-48c9-952d-a0933f8dc512","Type":"ContainerStarted","Data":"ef927980fb95532eb4c2650511d030e413a4d2c9ec2d82a381b8650e740c2a20"} Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.359070 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"63be4b34-e65f-4045-8223-6f19324c761b","Type":"ContainerStarted","Data":"467d1f6507f937142d76b992215c51bad1b8bbd7ccfb2b81b35d11eb10bcd102"} Nov 24 21:56:14 crc kubenswrapper[4767]: I1124 21:56:14.359082 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f680b41-c2c3-4795-98df-05e64ad8ed95","Type":"ContainerStarted","Data":"f17edf1843de8dfdf689ce0190c45429621af795fb6c53ffcceae8b196dee589"} Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.364717 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b77df9bd4-5cckf" 
event={"ID":"72d913d0-e2e2-4c49-9775-e16826ebcb2e","Type":"ContainerStarted","Data":"802dbcfe8071aebabed0a1fefca9f1393263eebb427ed96c7f1680b8b2c320bc"} Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.367046 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" event={"ID":"d949a6f4-9d83-42c5-b4df-e79178848c5f","Type":"ContainerStarted","Data":"bf5164fd3114ffe24fa0abff2563522802366ecee98342f12f3ccef4c1d8c8dc"} Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.572939 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78c4646f4f-mnjlq"] Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.576892 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.579677 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.579930 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.596722 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78c4646f4f-mnjlq"] Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.674523 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvh8k\" (UniqueName: \"kubernetes.io/projected/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-kube-api-access-dvh8k\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.674899 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-combined-ca-bundle\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.674968 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-public-tls-certs\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.675001 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-httpd-config\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.675041 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-ovndb-tls-certs\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.675161 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-config\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.675189 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-internal-tls-certs\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.776433 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-combined-ca-bundle\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.776472 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-public-tls-certs\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.776488 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-httpd-config\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.776528 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-ovndb-tls-certs\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.776569 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-config\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.776585 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-internal-tls-certs\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.776632 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvh8k\" (UniqueName: \"kubernetes.io/projected/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-kube-api-access-dvh8k\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.783403 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-internal-tls-certs\") pod \"neutron-78c4646f4f-mnjlq\" 
(UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.783871 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-ovndb-tls-certs\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.785681 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-config\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.785755 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-combined-ca-bundle\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.786714 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-httpd-config\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.790199 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-public-tls-certs\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.793942 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvh8k\" (UniqueName: \"kubernetes.io/projected/13d6c00a-8e06-47a6-b1c7-f32681fd7ddd-kube-api-access-dvh8k\") pod \"neutron-78c4646f4f-mnjlq\" (UID: \"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd\") " pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:15 crc kubenswrapper[4767]: I1124 21:56:15.945670 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78c4646f4f-mnjlq"
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.212429 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9af43afe-d337-48a3-a1ec-568b83802765" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.389995 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b77df9bd4-5cckf" event={"ID":"72d913d0-e2e2-4c49-9775-e16826ebcb2e","Type":"ContainerStarted","Data":"c44ade794211c80693ef9ffb3fa8abc892c908856529cc35ab5d29e360efefec"}
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.417720 4767 generic.go:334] "Generic (PLEG): container finished" podID="d949a6f4-9d83-42c5-b4df-e79178848c5f" containerID="fdc7a34cf2233f9a0801c8a6ce3130d6fa50650cf051f8a63ac050fd4730a94d" exitCode=0
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.417788 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" event={"ID":"d949a6f4-9d83-42c5-b4df-e79178848c5f","Type":"ContainerDied","Data":"fdc7a34cf2233f9a0801c8a6ce3130d6fa50650cf051f8a63ac050fd4730a94d"}
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.425602 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d69c9d5c6-qr8nq" event={"ID":"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1","Type":"ContainerStarted","Data":"f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad"}
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.431065 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"31f5a67d-feb4-402c-ac35-fc17aca926c5","Type":"ContainerStarted","Data":"105ba1b3202a9e826b4462af96a973b2dc271b8f111e949256b3083a867bab1d"}
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.436812 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb","Type":"ContainerStarted","Data":"eed7c5396bda53207c76f488a9d06177b3be055e486a84e1090102f0579a4531"}
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.462159 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"63be4b34-e65f-4045-8223-6f19324c761b","Type":"ContainerStarted","Data":"4a07eb676272da194ca2701ee5634170343d36c98314f53bc30a599d7572dbe7"}
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.462208 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=4.462192579 podStartE2EDuration="4.462192579s" podCreationTimestamp="2025-11-24 21:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:16.461186841 +0000 UTC m=+1059.378170213" watchObservedRunningTime="2025-11-24 21:56:16.462192579 +0000 UTC m=+1059.379175951"
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.469339 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f680b41-c2c3-4795-98df-05e64ad8ed95","Type":"ContainerStarted","Data":"f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497"}
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.491083 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=4.491058809 podStartE2EDuration="4.491058809s" podCreationTimestamp="2025-11-24 21:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:16.484908004 +0000 UTC m=+1059.401891376" watchObservedRunningTime="2025-11-24 21:56:16.491058809 +0000 UTC m=+1059.408042181"
Nov 24 21:56:16 crc kubenswrapper[4767]: I1124 21:56:16.611844 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78c4646f4f-mnjlq"]
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.480007 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d69c9d5c6-qr8nq" event={"ID":"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1","Type":"ContainerStarted","Data":"03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02"}
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.482192 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb","Type":"ContainerStarted","Data":"b94edabda1034bf0bd7f22ca6890896c09d5b2b7477fb6b5ea59044cde213744"}
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.482409 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.485729 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb2678c-0ca9-48c9-952d-a0933f8dc512","Type":"ContainerStarted","Data":"5523fcd67b177ea805dd42f7eed90657fe3708537915be8cd080dcb7b145e833"}
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.488073 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b77df9bd4-5cckf" event={"ID":"72d913d0-e2e2-4c49-9775-e16826ebcb2e","Type":"ContainerStarted","Data":"0bbbb7007afab2707a502a42f9b0af7b254e8ef485eeacb17d1f6f3f86a3b416"}
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.488162 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b77df9bd4-5cckf"
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.489744 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78c4646f4f-mnjlq" event={"ID":"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd","Type":"ContainerStarted","Data":"2a315a285659b421074fb232c0e4a1686c277a284093755edcfd544017edd353"}
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.489773 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78c4646f4f-mnjlq" event={"ID":"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd","Type":"ContainerStarted","Data":"c0c61dbdeaf285d1095239e9d86713be89351a218d4bd03fe47387b2ec541122"}
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.491649 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" event={"ID":"d949a6f4-9d83-42c5-b4df-e79178848c5f","Type":"ContainerStarted","Data":"b7e62785ab79523ea265bb9f0ab2bb098e95a50d92a88879ac58a4b1c3bb9433"}
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.491788 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7"
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.494175 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-567c96d68-4rmbm" event={"ID":"f3a751ba-fb23-4cd3-a1f7-2c843e04ab47","Type":"ContainerStarted","Data":"2c56127284f980075bf49c5e1bf116a58cec913d0987965313ca892aef26777a"}
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.500023 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6d69c9d5c6-qr8nq" podStartSLOduration=26.880754023 podStartE2EDuration="27.500005575s" podCreationTimestamp="2025-11-24 21:55:50 +0000 UTC" firstStartedPulling="2025-11-24 21:56:12.107470489 +0000 UTC m=+1055.024453861" lastFinishedPulling="2025-11-24 21:56:12.726722041 +0000 UTC m=+1055.643705413" observedRunningTime="2025-11-24 21:56:17.496178986 +0000 UTC m=+1060.413162368" watchObservedRunningTime="2025-11-24 21:56:17.500005575 +0000 UTC m=+1060.416988947"
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.523554 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" podStartSLOduration=5.523534833 podStartE2EDuration="5.523534833s" podCreationTimestamp="2025-11-24 21:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:17.51461596 +0000 UTC m=+1060.431599332" watchObservedRunningTime="2025-11-24 21:56:17.523534833 +0000 UTC m=+1060.440518205"
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.549032 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.549015457 podStartE2EDuration="5.549015457s" podCreationTimestamp="2025-11-24 21:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:17.538518949 +0000 UTC m=+1060.455502331" watchObservedRunningTime="2025-11-24 21:56:17.549015457 +0000 UTC m=+1060.465998829"
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.570768 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-567c96d68-4rmbm" podStartSLOduration=26.969836793 podStartE2EDuration="27.570749054s" podCreationTimestamp="2025-11-24 21:55:50 +0000 UTC" firstStartedPulling="2025-11-24 21:56:12.125560963 +0000 UTC m=+1055.042544335" lastFinishedPulling="2025-11-24 21:56:12.726473224 +0000 UTC m=+1055.643456596" observedRunningTime="2025-11-24 21:56:17.561668616 +0000 UTC m=+1060.478651988" watchObservedRunningTime="2025-11-24 21:56:17.570749054 +0000 UTC m=+1060.487732426"
Nov 24 21:56:17 crc kubenswrapper[4767]: I1124 21:56:17.592327 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b77df9bd4-5cckf" podStartSLOduration=5.592306296 podStartE2EDuration="5.592306296s" podCreationTimestamp="2025-11-24 21:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:17.588497958 +0000 UTC m=+1060.505481330" watchObservedRunningTime="2025-11-24 21:56:17.592306296 +0000 UTC m=+1060.509289668"
Nov 24 21:56:18 crc kubenswrapper[4767]: I1124 21:56:18.233174 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Nov 24 21:56:18 crc kubenswrapper[4767]: I1124 21:56:18.334264 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Nov 24 21:56:19 crc kubenswrapper[4767]: I1124 21:56:19.453149 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Nov 24 21:56:19 crc kubenswrapper[4767]: I1124 21:56:19.530240 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f680b41-c2c3-4795-98df-05e64ad8ed95","Type":"ContainerStarted","Data":"257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217"}
Nov 24 21:56:19 crc kubenswrapper[4767]: I1124 21:56:19.530346 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="deb2678c-0ca9-48c9-952d-a0933f8dc512" containerName="glance-log" containerID="cri-o://ef927980fb95532eb4c2650511d030e413a4d2c9ec2d82a381b8650e740c2a20" gracePeriod=30
Nov 24 21:56:19 crc kubenswrapper[4767]: I1124 21:56:19.530415 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="deb2678c-0ca9-48c9-952d-a0933f8dc512" containerName="glance-httpd" containerID="cri-o://5523fcd67b177ea805dd42f7eed90657fe3708537915be8cd080dcb7b145e833" gracePeriod=30
Nov 24 21:56:19 crc kubenswrapper[4767]: I1124 21:56:19.563581 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=32.563563543 podStartE2EDuration="32.563563543s" podCreationTimestamp="2025-11-24 21:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:19.557503561 +0000 UTC m=+1062.474486933" watchObservedRunningTime="2025-11-24 21:56:19.563563543 +0000 UTC m=+1062.480546915"
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.541364 4767 generic.go:334] "Generic (PLEG): container finished" podID="deb2678c-0ca9-48c9-952d-a0933f8dc512" containerID="5523fcd67b177ea805dd42f7eed90657fe3708537915be8cd080dcb7b145e833" exitCode=0
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.541615 4767 generic.go:334] "Generic (PLEG): container finished" podID="deb2678c-0ca9-48c9-952d-a0933f8dc512" containerID="ef927980fb95532eb4c2650511d030e413a4d2c9ec2d82a381b8650e740c2a20" exitCode=143
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.541440 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb2678c-0ca9-48c9-952d-a0933f8dc512","Type":"ContainerDied","Data":"5523fcd67b177ea805dd42f7eed90657fe3708537915be8cd080dcb7b145e833"}
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.541678 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb2678c-0ca9-48c9-952d-a0933f8dc512","Type":"ContainerDied","Data":"ef927980fb95532eb4c2650511d030e413a4d2c9ec2d82a381b8650e740c2a20"}
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.543784 4767 generic.go:334] "Generic (PLEG): container finished" podID="54aafebf-445c-4632-81c3-1f35b84a4ef7" containerID="f2b8544d895d08acb115bbcb716dc8d72b95ad8f72cf4551f1c82a0cd888ac92" exitCode=0
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.543900 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2k8wb" event={"ID":"54aafebf-445c-4632-81c3-1f35b84a4ef7","Type":"ContainerDied","Data":"f2b8544d895d08acb115bbcb716dc8d72b95ad8f72cf4551f1c82a0cd888ac92"}
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.547733 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78c4646f4f-mnjlq" event={"ID":"13d6c00a-8e06-47a6-b1c7-f32681fd7ddd","Type":"ContainerStarted","Data":"3aeb90df7a0b2acadaa3c02e72f681dbce5792e527a7f9fff0e711a90fba7c9b"}
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.578528 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-78c4646f4f-mnjlq" podStartSLOduration=5.57850719 podStartE2EDuration="5.57850719s" podCreationTimestamp="2025-11-24 21:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:20.576134692 +0000 UTC m=+1063.493118074" watchObservedRunningTime="2025-11-24 21:56:20.57850719 +0000 UTC m=+1063.495490572"
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.610733 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.610711924 podStartE2EDuration="8.610711924s" podCreationTimestamp="2025-11-24 21:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:20.597506009 +0000 UTC m=+1063.514489381" watchObservedRunningTime="2025-11-24 21:56:20.610711924 +0000 UTC m=+1063.527695296"
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.805402 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d69c9d5c6-qr8nq"
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.805446 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6d69c9d5c6-qr8nq"
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.813581 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-567c96d68-4rmbm"
Nov 24 21:56:20 crc kubenswrapper[4767]: I1124 21:56:20.813659 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-567c96d68-4rmbm"
Nov 24 21:56:21 crc kubenswrapper[4767]: I1124 21:56:21.561681 4767 generic.go:334] "Generic (PLEG): container finished" podID="83eba727-cd44-4013-8ce3-5672f4f7f595" containerID="0c03e0e66a3599ba2b540ceb043b24be074b360e6c0b32d2722f8a0986479037" exitCode=0
Nov 24 21:56:21 crc kubenswrapper[4767]: I1124 21:56:21.561771 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hd5nf" event={"ID":"83eba727-cd44-4013-8ce3-5672f4f7f595","Type":"ContainerDied","Data":"0c03e0e66a3599ba2b540ceb043b24be074b360e6c0b32d2722f8a0986479037"}
Nov 24 21:56:21 crc kubenswrapper[4767]: I1124 21:56:21.562837 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-78c4646f4f-mnjlq"
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.575568 4767 generic.go:334] "Generic (PLEG): container finished" podID="fd6b50ba-b398-4a5f-bfc0-fd909ddf2703" containerID="88543f88cdf848cca677fbf0f060eaf50179873c8d4a13f37c36e487327e2ea8" exitCode=0
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.575730 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r9fp5" event={"ID":"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703","Type":"ContainerDied","Data":"88543f88cdf848cca677fbf0f060eaf50179873c8d4a13f37c36e487327e2ea8"}
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.771936 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2k8wb"
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.850602 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlswg\" (UniqueName: \"kubernetes.io/projected/54aafebf-445c-4632-81c3-1f35b84a4ef7-kube-api-access-wlswg\") pod \"54aafebf-445c-4632-81c3-1f35b84a4ef7\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") "
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.850692 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-credential-keys\") pod \"54aafebf-445c-4632-81c3-1f35b84a4ef7\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") "
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.850862 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-scripts\") pod \"54aafebf-445c-4632-81c3-1f35b84a4ef7\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") "
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.851030 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-combined-ca-bundle\") pod \"54aafebf-445c-4632-81c3-1f35b84a4ef7\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") "
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.851121 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-config-data\") pod \"54aafebf-445c-4632-81c3-1f35b84a4ef7\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") "
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.851190 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-fernet-keys\") pod \"54aafebf-445c-4632-81c3-1f35b84a4ef7\" (UID: \"54aafebf-445c-4632-81c3-1f35b84a4ef7\") "
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.857225 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "54aafebf-445c-4632-81c3-1f35b84a4ef7" (UID: "54aafebf-445c-4632-81c3-1f35b84a4ef7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.857326 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54aafebf-445c-4632-81c3-1f35b84a4ef7-kube-api-access-wlswg" (OuterVolumeSpecName: "kube-api-access-wlswg") pod "54aafebf-445c-4632-81c3-1f35b84a4ef7" (UID: "54aafebf-445c-4632-81c3-1f35b84a4ef7"). InnerVolumeSpecName "kube-api-access-wlswg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.860779 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-scripts" (OuterVolumeSpecName: "scripts") pod "54aafebf-445c-4632-81c3-1f35b84a4ef7" (UID: "54aafebf-445c-4632-81c3-1f35b84a4ef7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.878400 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "54aafebf-445c-4632-81c3-1f35b84a4ef7" (UID: "54aafebf-445c-4632-81c3-1f35b84a4ef7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.899866 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54aafebf-445c-4632-81c3-1f35b84a4ef7" (UID: "54aafebf-445c-4632-81c3-1f35b84a4ef7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.909343 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-config-data" (OuterVolumeSpecName: "config-data") pod "54aafebf-445c-4632-81c3-1f35b84a4ef7" (UID: "54aafebf-445c-4632-81c3-1f35b84a4ef7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.923658 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.923788 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.954439 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.954459 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.954471 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.954479 4767 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.954487 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlswg\" (UniqueName: \"kubernetes.io/projected/54aafebf-445c-4632-81c3-1f35b84a4ef7-kube-api-access-wlswg\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.954496 4767 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54aafebf-445c-4632-81c3-1f35b84a4ef7-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.968727 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hd5nf"
Nov 24 21:56:22 crc kubenswrapper[4767]: I1124 21:56:22.975167 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.003812 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.057099 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83eba727-cd44-4013-8ce3-5672f4f7f595-logs\") pod \"83eba727-cd44-4013-8ce3-5672f4f7f595\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.057232 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-scripts\") pod \"83eba727-cd44-4013-8ce3-5672f4f7f595\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.057280 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzm85\" (UniqueName: \"kubernetes.io/projected/83eba727-cd44-4013-8ce3-5672f4f7f595-kube-api-access-nzm85\") pod \"83eba727-cd44-4013-8ce3-5672f4f7f595\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.057441 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83eba727-cd44-4013-8ce3-5672f4f7f595-logs" (OuterVolumeSpecName: "logs") pod "83eba727-cd44-4013-8ce3-5672f4f7f595" (UID: "83eba727-cd44-4013-8ce3-5672f4f7f595"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.057457 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-combined-ca-bundle\") pod \"83eba727-cd44-4013-8ce3-5672f4f7f595\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.057550 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-config-data\") pod \"83eba727-cd44-4013-8ce3-5672f4f7f595\" (UID: \"83eba727-cd44-4013-8ce3-5672f4f7f595\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.058395 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83eba727-cd44-4013-8ce3-5672f4f7f595-logs\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.063392 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-scripts" (OuterVolumeSpecName: "scripts") pod "83eba727-cd44-4013-8ce3-5672f4f7f595" (UID: "83eba727-cd44-4013-8ce3-5672f4f7f595"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.063453 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83eba727-cd44-4013-8ce3-5672f4f7f595-kube-api-access-nzm85" (OuterVolumeSpecName: "kube-api-access-nzm85") pod "83eba727-cd44-4013-8ce3-5672f4f7f595" (UID: "83eba727-cd44-4013-8ce3-5672f4f7f595"). InnerVolumeSpecName "kube-api-access-nzm85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.078260 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.088138 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83eba727-cd44-4013-8ce3-5672f4f7f595" (UID: "83eba727-cd44-4013-8ce3-5672f4f7f595"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.092481 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-config-data" (OuterVolumeSpecName: "config-data") pod "83eba727-cd44-4013-8ce3-5672f4f7f595" (UID: "83eba727-cd44-4013-8ce3-5672f4f7f595"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160016 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"deb2678c-0ca9-48c9-952d-a0933f8dc512\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160106 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-config-data\") pod \"deb2678c-0ca9-48c9-952d-a0933f8dc512\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160154 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-scripts\") pod \"deb2678c-0ca9-48c9-952d-a0933f8dc512\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160198 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-combined-ca-bundle\") pod \"deb2678c-0ca9-48c9-952d-a0933f8dc512\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160287 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-httpd-run\") pod \"deb2678c-0ca9-48c9-952d-a0933f8dc512\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160395 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-logs\") pod \"deb2678c-0ca9-48c9-952d-a0933f8dc512\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160429 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vl4r\" (UniqueName: \"kubernetes.io/projected/deb2678c-0ca9-48c9-952d-a0933f8dc512-kube-api-access-9vl4r\") pod \"deb2678c-0ca9-48c9-952d-a0933f8dc512\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160444 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-internal-tls-certs\") pod \"deb2678c-0ca9-48c9-952d-a0933f8dc512\" (UID: \"deb2678c-0ca9-48c9-952d-a0933f8dc512\") "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160840 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160858 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160866 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eba727-cd44-4013-8ce3-5672f4f7f595-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.160875 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzm85\" (UniqueName: \"kubernetes.io/projected/83eba727-cd44-4013-8ce3-5672f4f7f595-kube-api-access-nzm85\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.161832 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-logs" (OuterVolumeSpecName: "logs") pod "deb2678c-0ca9-48c9-952d-a0933f8dc512" (UID: "deb2678c-0ca9-48c9-952d-a0933f8dc512"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.163709 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "deb2678c-0ca9-48c9-952d-a0933f8dc512" (UID: "deb2678c-0ca9-48c9-952d-a0933f8dc512"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.164386 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-scripts" (OuterVolumeSpecName: "scripts") pod "deb2678c-0ca9-48c9-952d-a0933f8dc512" (UID: "deb2678c-0ca9-48c9-952d-a0933f8dc512"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.164943 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb2678c-0ca9-48c9-952d-a0933f8dc512-kube-api-access-9vl4r" (OuterVolumeSpecName: "kube-api-access-9vl4r") pod "deb2678c-0ca9-48c9-952d-a0933f8dc512" (UID: "deb2678c-0ca9-48c9-952d-a0933f8dc512"). InnerVolumeSpecName "kube-api-access-9vl4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.165080 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "deb2678c-0ca9-48c9-952d-a0933f8dc512" (UID: "deb2678c-0ca9-48c9-952d-a0933f8dc512"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.191399 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deb2678c-0ca9-48c9-952d-a0933f8dc512" (UID: "deb2678c-0ca9-48c9-952d-a0933f8dc512"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.202367 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "deb2678c-0ca9-48c9-952d-a0933f8dc512" (UID: "deb2678c-0ca9-48c9-952d-a0933f8dc512"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.202813 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.222115 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-config-data" (OuterVolumeSpecName: "config-data") pod "deb2678c-0ca9-48c9-952d-a0933f8dc512" (UID: "deb2678c-0ca9-48c9-952d-a0933f8dc512"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.225905 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.230458 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.232082 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.262489 4767 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.262518 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.262528 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.262536 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.262545 4767 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.262552 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb2678c-0ca9-48c9-952d-a0933f8dc512-logs\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.262560 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vl4r\" (UniqueName: \"kubernetes.io/projected/deb2678c-0ca9-48c9-952d-a0933f8dc512-kube-api-access-9vl4r\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.262569 4767 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb2678c-0ca9-48c9-952d-a0933f8dc512-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.283067 4767 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.315294 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.344921 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.365000 4767 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.397461 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.483025 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-2cfjb"]
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.483415 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" podUID="42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" containerName="dnsmasq-dns" containerID="cri-o://25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3" gracePeriod=10
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.626569 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb2678c-0ca9-48c9-952d-a0933f8dc512","Type":"ContainerDied","Data":"a6b51f864e27e5f3001385863f9a37d5d1d1acc2078c09374908a0d71726f1fb"}
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.626618 4767 scope.go:117] "RemoveContainer" containerID="5523fcd67b177ea805dd42f7eed90657fe3708537915be8cd080dcb7b145e833"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.626744 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.657734 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hd5nf"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.658480 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hd5nf" event={"ID":"83eba727-cd44-4013-8ce3-5672f4f7f595","Type":"ContainerDied","Data":"5cfe502469f5930b5bfd39de360d3f90a715dc5cec8d4446ce3a77b2b6635a36"}
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.658521 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cfe502469f5930b5bfd39de360d3f90a715dc5cec8d4446ce3a77b2b6635a36"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.733770 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2k8wb"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.741054 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2k8wb" event={"ID":"54aafebf-445c-4632-81c3-1f35b84a4ef7","Type":"ContainerDied","Data":"0f37ac4f1a685b4636ec421e22c8790819a512c301be03ee3feb5f89c9ed784d"}
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.741120 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f37ac4f1a685b4636ec421e22c8790819a512c301be03ee3feb5f89c9ed784d"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.741166 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.742509 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.742529 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.768218 4767 scope.go:117] "RemoveContainer" containerID="ef927980fb95532eb4c2650511d030e413a4d2c9ec2d82a381b8650e740c2a20"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.775364 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.797421 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.800546 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.806672 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.824241 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 21:56:23 crc kubenswrapper[4767]: E1124 21:56:23.824618 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb2678c-0ca9-48c9-952d-a0933f8dc512" containerName="glance-log"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.824628 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb2678c-0ca9-48c9-952d-a0933f8dc512" containerName="glance-log"
Nov 24 21:56:23 crc kubenswrapper[4767]: E1124 21:56:23.824644 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83eba727-cd44-4013-8ce3-5672f4f7f595" containerName="placement-db-sync"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.824651 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="83eba727-cd44-4013-8ce3-5672f4f7f595" containerName="placement-db-sync"
Nov 24 21:56:23 crc kubenswrapper[4767]: E1124 21:56:23.824675 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb2678c-0ca9-48c9-952d-a0933f8dc512" containerName="glance-httpd"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.824680 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb2678c-0ca9-48c9-952d-a0933f8dc512" containerName="glance-httpd"
Nov 24 21:56:23 crc kubenswrapper[4767]: E1124 21:56:23.824696 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aafebf-445c-4632-81c3-1f35b84a4ef7" containerName="keystone-bootstrap"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.824701 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aafebf-445c-4632-81c3-1f35b84a4ef7" containerName="keystone-bootstrap"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.824862 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aafebf-445c-4632-81c3-1f35b84a4ef7" containerName="keystone-bootstrap"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.824873 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb2678c-0ca9-48c9-952d-a0933f8dc512" containerName="glance-log"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.824888 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb2678c-0ca9-48c9-952d-a0933f8dc512" containerName="glance-httpd"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.824904 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="83eba727-cd44-4013-8ce3-5672f4f7f595" containerName="placement-db-sync"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.825805 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.826797 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.831919 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.832181 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.839681 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.875159 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.875195 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.875215 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.875253 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.875286 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8ntq\" (UniqueName: \"kubernetes.io/projected/1fa93f02-121b-49f9-a08b-e04f44a142f8-kube-api-access-f8ntq\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.875313 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.880235 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.880317 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.983190 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.983246 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.983320 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.990167 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.990214 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.990308 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.990339 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8ntq\" (UniqueName: \"kubernetes.io/projected/1fa93f02-121b-49f9-a08b-e04f44a142f8-kube-api-access-f8ntq\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.990375 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.992694 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.995956 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.997223 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:23 crc kubenswrapper[4767]: I1124 21:56:23.998261 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.015779 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.016949 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.021465 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8ntq\" (UniqueName: \"kubernetes.io/projected/1fa93f02-121b-49f9-a08b-e04f44a142f8-kube-api-access-f8ntq\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.031204 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.033640 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7574cdc49f-grwcx"]
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.042115 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.048842 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.053006 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbsgd"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.053251 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.054169 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.055710 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.055947 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.056113 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.096569 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7574cdc49f-grwcx"]
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.098040 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-combined-ca-bundle\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.098073 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-config-data\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.098094 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-internal-tls-certs\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.098162 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-scripts\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.098189 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-public-tls-certs\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.098204 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-fernet-keys\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.098261 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-credential-keys\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.098307 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2zhv\" (UniqueName: \"kubernetes.io/projected/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-kube-api-access-m2zhv\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.113539 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d6f9dff64-d2zkv"]
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.118100 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d6f9dff64-d2zkv"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.126740 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.127735 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.127841 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jmhbv"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.128159 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.128682 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.132710 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d6f9dff64-d2zkv"]
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.153602 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.208553 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-scripts\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.208859 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-public-tls-certs\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.208905 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-fernet-keys\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.208944 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-credential-keys\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.209002 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2zhv\" (UniqueName: \"kubernetes.io/projected/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-kube-api-access-m2zhv\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.209028 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-combined-ca-bundle\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.209067 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-config-data\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.209086 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-internal-tls-certs\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.214502 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-internal-tls-certs\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.216306 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-fernet-keys\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.217477 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-combined-ca-bundle\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.217614 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-credential-keys\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.220838 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-config-data\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.222931 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-scripts\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.225384 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-public-tls-certs\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.230840 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2zhv\" (UniqueName: \"kubernetes.io/projected/962cbac3-dc40-4b91-a5ca-69c6fb9ad020-kube-api-access-m2zhv\") pod \"keystone-7574cdc49f-grwcx\" (UID: \"962cbac3-dc40-4b91-a5ca-69c6fb9ad020\") " pod="openstack/keystone-7574cdc49f-grwcx"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.310211 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e3198f8-260a-4ccd-a470-100aa54835c0-logs\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.310367 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-combined-ca-bundle\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.310391 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-internal-tls-certs\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.310411 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x2nz\" (UniqueName: \"kubernetes.io/projected/4e3198f8-260a-4ccd-a470-100aa54835c0-kube-api-access-2x2nz\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.310433 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-config-data\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.310462 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-scripts\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv"
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.310468 4767 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.310477 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-public-tls-certs\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.346947 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb2678c-0ca9-48c9-952d-a0933f8dc512" path="/var/lib/kubelet/pods/deb2678c-0ca9-48c9-952d-a0933f8dc512/volumes" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.412404 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-combined-ca-bundle\") pod \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.412597 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqpjs\" (UniqueName: \"kubernetes.io/projected/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-kube-api-access-xqpjs\") pod \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.412629 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-db-sync-config-data\") pod \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\" (UID: \"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703\") " Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.412907 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-combined-ca-bundle\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.412961 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-internal-tls-certs\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.412979 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x2nz\" (UniqueName: \"kubernetes.io/projected/4e3198f8-260a-4ccd-a470-100aa54835c0-kube-api-access-2x2nz\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.413036 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-config-data\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.413073 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-scripts\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.413097 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-public-tls-certs\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.413145 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e3198f8-260a-4ccd-a470-100aa54835c0-logs\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.417069 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-kube-api-access-xqpjs" (OuterVolumeSpecName: "kube-api-access-xqpjs") pod "fd6b50ba-b398-4a5f-bfc0-fd909ddf2703" (UID: "fd6b50ba-b398-4a5f-bfc0-fd909ddf2703"). InnerVolumeSpecName "kube-api-access-xqpjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.417917 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fd6b50ba-b398-4a5f-bfc0-fd909ddf2703" (UID: "fd6b50ba-b398-4a5f-bfc0-fd909ddf2703"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.419039 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e3198f8-260a-4ccd-a470-100aa54835c0-logs\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.420290 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-internal-tls-certs\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.432658 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-config-data\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.439788 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-public-tls-certs\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.440104 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-scripts\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.440423 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3198f8-260a-4ccd-a470-100aa54835c0-combined-ca-bundle\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.450409 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x2nz\" (UniqueName: \"kubernetes.io/projected/4e3198f8-260a-4ccd-a470-100aa54835c0-kube-api-access-2x2nz\") pod \"placement-7d6f9dff64-d2zkv\" (UID: \"4e3198f8-260a-4ccd-a470-100aa54835c0\") " pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.453694 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7574cdc49f-grwcx" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.459905 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd6b50ba-b398-4a5f-bfc0-fd909ddf2703" (UID: "fd6b50ba-b398-4a5f-bfc0-fd909ddf2703"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.478838 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.514917 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.515443 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqpjs\" (UniqueName: \"kubernetes.io/projected/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-kube-api-access-xqpjs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.515469 4767 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.515478 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.621208 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzbln\" (UniqueName: \"kubernetes.io/projected/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-kube-api-access-jzbln\") pod \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.621281 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-svc\") pod \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.621491 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-swift-storage-0\") pod \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.621545 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-config\") pod \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.621637 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-nb\") pod \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.621685 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-sb\") pod \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\" (UID: \"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0\") " Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.650910 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-kube-api-access-jzbln" (OuterVolumeSpecName: "kube-api-access-jzbln") pod "42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" (UID: 
"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0"). InnerVolumeSpecName "kube-api-access-jzbln". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.697443 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" (UID: "42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.727743 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.727769 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzbln\" (UniqueName: \"kubernetes.io/projected/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-kube-api-access-jzbln\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.741519 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-config" (OuterVolumeSpecName: "config") pod "42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" (UID: "42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.769953 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" (UID: "42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.775510 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" (UID: "42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.777143 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a9ba0d-f67a-4887-82d8-3135cf56098a","Type":"ContainerStarted","Data":"7939c13920e5f49beeeaa6b27de898ca5f5a9be94aaab406ff160abeaab191c6"} Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.789420 4767 generic.go:334] "Generic (PLEG): container finished" podID="42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" containerID="25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3" exitCode=0 Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.789523 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" event={"ID":"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0","Type":"ContainerDied","Data":"25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3"} Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.789552 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" event={"ID":"42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0","Type":"ContainerDied","Data":"c513fbeaba569fb1da7c4e331a36feb4b0ef93a8185ff88576c0aae966d57826"} Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.789570 4767 scope.go:117] "RemoveContainer" containerID="25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.789702 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-2cfjb" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.807933 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-r9fp5" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.809312 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r9fp5" event={"ID":"fd6b50ba-b398-4a5f-bfc0-fd909ddf2703","Type":"ContainerDied","Data":"74bfa38559c16c38a429150ddd4004bd83260c1d928270ecd961782830fd943c"} Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.809354 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74bfa38559c16c38a429150ddd4004bd83260c1d928270ecd961782830fd943c" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.823586 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" (UID: "42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.829531 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.829772 4767 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.829781 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.829792 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.879721 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-dbc6679f5-nfj96"] Nov 24 21:56:24 crc kubenswrapper[4767]: E1124 21:56:24.880085 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6b50ba-b398-4a5f-bfc0-fd909ddf2703" containerName="barbican-db-sync" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.880095 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6b50ba-b398-4a5f-bfc0-fd909ddf2703" containerName="barbican-db-sync" Nov 24 21:56:24 crc kubenswrapper[4767]: E1124 21:56:24.880130 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" containerName="init" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.880136 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" containerName="init" Nov 24 21:56:24 crc kubenswrapper[4767]: E1124 21:56:24.880154 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" containerName="dnsmasq-dns" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.880160 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" containerName="dnsmasq-dns" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.880333 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd6b50ba-b398-4a5f-bfc0-fd909ddf2703" containerName="barbican-db-sync" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.880343 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" containerName="dnsmasq-dns" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.881236 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.887016 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hk9tc" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.887197 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.887309 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.939319 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-c6fc47588-98bn5"] Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.940808 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.941714 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc658137-f491-4e87-bdaa-cdc34f59a3a9-config-data\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.941790 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc658137-f491-4e87-bdaa-cdc34f59a3a9-config-data-custom\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.941814 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnl8g\" (UniqueName: \"kubernetes.io/projected/bc658137-f491-4e87-bdaa-cdc34f59a3a9-kube-api-access-pnl8g\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.942546 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc658137-f491-4e87-bdaa-cdc34f59a3a9-logs\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.942600 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc658137-f491-4e87-bdaa-cdc34f59a3a9-combined-ca-bundle\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.945660 4767 scope.go:117] "RemoveContainer" containerID="a162fff69b46ab614d15ba59c59c5d1e99e042cb74e22b561c6faa03e563ffdb" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.959754 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.975039 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-dbc6679f5-nfj96"] 
Nov 24 21:56:24 crc kubenswrapper[4767]: I1124 21:56:24.988859 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c6fc47588-98bn5"] Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.046631 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxlvr\" (UniqueName: \"kubernetes.io/projected/13440493-b7a7-40a6-9de1-e375ae1c8404-kube-api-access-gxlvr\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.046689 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc658137-f491-4e87-bdaa-cdc34f59a3a9-logs\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.046801 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc658137-f491-4e87-bdaa-cdc34f59a3a9-combined-ca-bundle\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.046905 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13440493-b7a7-40a6-9de1-e375ae1c8404-combined-ca-bundle\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.046929 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc658137-f491-4e87-bdaa-cdc34f59a3a9-config-data\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.046968 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13440493-b7a7-40a6-9de1-e375ae1c8404-config-data-custom\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.046989 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc658137-f491-4e87-bdaa-cdc34f59a3a9-config-data-custom\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.047020 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnl8g\" (UniqueName: \"kubernetes.io/projected/bc658137-f491-4e87-bdaa-cdc34f59a3a9-kube-api-access-pnl8g\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.047099 4767 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13440493-b7a7-40a6-9de1-e375ae1c8404-logs\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.047169 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13440493-b7a7-40a6-9de1-e375ae1c8404-config-data\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.048809 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc658137-f491-4e87-bdaa-cdc34f59a3a9-logs\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:25 crc kubenswrapper[4767]: W1124 21:56:25.065443 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fa93f02_121b_49f9_a08b_e04f44a142f8.slice/crio-b25ba1efa2f8f42625562634179f200a08ca392c68452b58f5cc2f139463bcfa WatchSource:0}: Error finding container b25ba1efa2f8f42625562634179f200a08ca392c68452b58f5cc2f139463bcfa: Status 404 returned error can't find the container with id b25ba1efa2f8f42625562634179f200a08ca392c68452b58f5cc2f139463bcfa Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.066860 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc658137-f491-4e87-bdaa-cdc34f59a3a9-config-data\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.068855 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc658137-f491-4e87-bdaa-cdc34f59a3a9-combined-ca-bundle\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.068981 4767 scope.go:117] "RemoveContainer" containerID="25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3" Nov 24 21:56:25 crc kubenswrapper[4767]: E1124 21:56:25.072535 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3\": container with ID starting with 25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3 not found: ID does not exist" containerID="25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.072587 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3"} err="failed to get container status \"25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3\": rpc error: code = NotFound desc = could not find container 
\"25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3\": container with ID starting with 25225460c6941b06378021c4663dacc4d933c11a1886864d79621f5483bae9c3 not found: ID does not exist" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.072618 4767 scope.go:117] "RemoveContainer" containerID="a162fff69b46ab614d15ba59c59c5d1e99e042cb74e22b561c6faa03e563ffdb" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.072961 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnl8g\" (UniqueName: \"kubernetes.io/projected/bc658137-f491-4e87-bdaa-cdc34f59a3a9-kube-api-access-pnl8g\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.076326 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bzgpq"] Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.078236 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.080711 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc658137-f491-4e87-bdaa-cdc34f59a3a9-config-data-custom\") pod \"barbican-worker-dbc6679f5-nfj96\" (UID: \"bc658137-f491-4e87-bdaa-cdc34f59a3a9\") " pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:25 crc kubenswrapper[4767]: E1124 21:56:25.082870 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a162fff69b46ab614d15ba59c59c5d1e99e042cb74e22b561c6faa03e563ffdb\": container with ID starting with a162fff69b46ab614d15ba59c59c5d1e99e042cb74e22b561c6faa03e563ffdb not found: ID does not exist" containerID="a162fff69b46ab614d15ba59c59c5d1e99e042cb74e22b561c6faa03e563ffdb" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.083192 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a162fff69b46ab614d15ba59c59c5d1e99e042cb74e22b561c6faa03e563ffdb"} err="failed to get container status \"a162fff69b46ab614d15ba59c59c5d1e99e042cb74e22b561c6faa03e563ffdb\": rpc error: code = NotFound desc = could not find container \"a162fff69b46ab614d15ba59c59c5d1e99e042cb74e22b561c6faa03e563ffdb\": container with ID starting with a162fff69b46ab614d15ba59c59c5d1e99e042cb74e22b561c6faa03e563ffdb not found: ID does not exist" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.140555 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.153224 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13440493-b7a7-40a6-9de1-e375ae1c8404-config-data\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.153543 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-svc\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 
crc kubenswrapper[4767]: I1124 21:56:25.153754 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxlvr\" (UniqueName: \"kubernetes.io/projected/13440493-b7a7-40a6-9de1-e375ae1c8404-kube-api-access-gxlvr\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.153883 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-config\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.154033 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.154115 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13440493-b7a7-40a6-9de1-e375ae1c8404-combined-ca-bundle\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.154206 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.154302 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13440493-b7a7-40a6-9de1-e375ae1c8404-config-data-custom\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.154452 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13440493-b7a7-40a6-9de1-e375ae1c8404-logs\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.154562 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn25k\" (UniqueName: \"kubernetes.io/projected/fa66113b-5836-4a14-be15-8f2ef6093310-kube-api-access-qn25k\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.154637 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.164160 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bzgpq"] Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.164917 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13440493-b7a7-40a6-9de1-e375ae1c8404-logs\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.170780 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13440493-b7a7-40a6-9de1-e375ae1c8404-config-data\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.175347 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13440493-b7a7-40a6-9de1-e375ae1c8404-config-data-custom\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.177150 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13440493-b7a7-40a6-9de1-e375ae1c8404-combined-ca-bundle\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.182440 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxlvr\" (UniqueName: \"kubernetes.io/projected/13440493-b7a7-40a6-9de1-e375ae1c8404-kube-api-access-gxlvr\") pod \"barbican-keystone-listener-c6fc47588-98bn5\" (UID: \"13440493-b7a7-40a6-9de1-e375ae1c8404\") " pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.198570 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-69dfb67c9d-pwwx6"] Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.215793 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.223494 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69dfb67c9d-pwwx6"] Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.223638 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.246350 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-dbc6679f5-nfj96" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.256931 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-config\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.256990 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.257020 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.257049 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-combined-ca-bundle\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.257079 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data-custom\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.257101 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40072229-e5be-485f-82d8-7e8c17e2c8c3-logs\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.257125 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzv7x\" (UniqueName: \"kubernetes.io/projected/40072229-e5be-485f-82d8-7e8c17e2c8c3-kube-api-access-fzv7x\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.257143 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn25k\" (UniqueName: \"kubernetes.io/projected/fa66113b-5836-4a14-be15-8f2ef6093310-kube-api-access-qn25k\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.257160 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-swift-storage-0\") pod 
\"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.257189 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-svc\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.257204 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.258163 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.258697 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.259704 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-config\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.262504 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-svc\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.264811 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.273545 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.280441 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn25k\" (UniqueName: \"kubernetes.io/projected/fa66113b-5836-4a14-be15-8f2ef6093310-kube-api-access-qn25k\") pod \"dnsmasq-dns-85ff748b95-bzgpq\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.297140 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-2cfjb"] Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.314341 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-2cfjb"] Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.351159 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d6f9dff64-d2zkv"] Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.365881 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data-custom\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.365978 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40072229-e5be-485f-82d8-7e8c17e2c8c3-logs\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.366073 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzv7x\" (UniqueName: \"kubernetes.io/projected/40072229-e5be-485f-82d8-7e8c17e2c8c3-kube-api-access-fzv7x\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.366152 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.366952 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-combined-ca-bundle\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.367558 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40072229-e5be-485f-82d8-7e8c17e2c8c3-logs\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.382647 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data-custom\") pod 
\"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.384756 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzv7x\" (UniqueName: \"kubernetes.io/projected/40072229-e5be-485f-82d8-7e8c17e2c8c3-kube-api-access-fzv7x\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.385717 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.387800 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7574cdc49f-grwcx"] Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.399695 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-combined-ca-bundle\") pod \"barbican-api-69dfb67c9d-pwwx6\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.406310 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.579585 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.842241 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d6f9dff64-d2zkv" event={"ID":"4e3198f8-260a-4ccd-a470-100aa54835c0","Type":"ContainerStarted","Data":"0ab1f26c23b472e8b479b7928985c8b7c7c7fc652d17416c4ad1072b88b4ec41"} Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.862470 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7574cdc49f-grwcx" event={"ID":"962cbac3-dc40-4b91-a5ca-69c6fb9ad020","Type":"ContainerStarted","Data":"357445eebdfe2d0cce30181ea90ccf03e084fa75104e8f091e7437e112bfde3b"} Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.862746 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7574cdc49f-grwcx" event={"ID":"962cbac3-dc40-4b91-a5ca-69c6fb9ad020","Type":"ContainerStarted","Data":"33b71d76f4c5b736370470a2e33d8fea79730463dbcafaae12b361b1e9a7c500"} Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.862798 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7574cdc49f-grwcx" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.869302 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fa93f02-121b-49f9-a08b-e04f44a142f8","Type":"ContainerStarted","Data":"b25ba1efa2f8f42625562634179f200a08ca392c68452b58f5cc2f139463bcfa"} Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.870373 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.895384 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-7574cdc49f-grwcx" podStartSLOduration=2.8953662270000002 podStartE2EDuration="2.895366227s" podCreationTimestamp="2025-11-24 21:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:25.885048834 +0000 UTC m=+1068.802032216" watchObservedRunningTime="2025-11-24 21:56:25.895366227 +0000 UTC m=+1068.812349599" Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.953312 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-dbc6679f5-nfj96"] Nov 24 21:56:25 crc kubenswrapper[4767]: I1124 21:56:25.959000 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c6fc47588-98bn5"] Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.169962 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bzgpq"] Nov 24 21:56:26 crc kubenswrapper[4767]: W1124 21:56:26.178110 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa66113b_5836_4a14_be15_8f2ef6093310.slice/crio-48994deeb70ef0d37db1da11f835db614712a42e17dbc2690dac22983baaa514 WatchSource:0}: Error finding container 48994deeb70ef0d37db1da11f835db614712a42e17dbc2690dac22983baaa514: Status 404 returned error can't find the container with id 48994deeb70ef0d37db1da11f835db614712a42e17dbc2690dac22983baaa514 Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.309059 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69dfb67c9d-pwwx6"] Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.351769 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0" path="/var/lib/kubelet/pods/42ae8d95-1b28-49a4-8ec3-23e8a4c9e6e0/volumes" Nov 24 21:56:26 crc kubenswrapper[4767]: W1124 21:56:26.379649 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40072229_e5be_485f_82d8_7e8c17e2c8c3.slice/crio-b93cc17fc18f51f3dec8a62cf8a296f2448387656e4028663565846c543744bb WatchSource:0}: Error finding container b93cc17fc18f51f3dec8a62cf8a296f2448387656e4028663565846c543744bb: Status 404 returned error can't find the container with id b93cc17fc18f51f3dec8a62cf8a296f2448387656e4028663565846c543744bb Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.895330 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69dfb67c9d-pwwx6" event={"ID":"40072229-e5be-485f-82d8-7e8c17e2c8c3","Type":"ContainerStarted","Data":"a6b69d1ac3731e59f6c52d083d504bb76c12faaca5751dbde17d0f3d0a2caf1e"} Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.895372 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69dfb67c9d-pwwx6" event={"ID":"40072229-e5be-485f-82d8-7e8c17e2c8c3","Type":"ContainerStarted","Data":"b93cc17fc18f51f3dec8a62cf8a296f2448387656e4028663565846c543744bb"} Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.908193 4767 generic.go:334] "Generic (PLEG): container finished" podID="fa66113b-5836-4a14-be15-8f2ef6093310" containerID="aedb0dbece973442c87095e420dec4503758c6c0eddabc0ed38179a3961be402" exitCode=0 Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.908285 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" 
event={"ID":"fa66113b-5836-4a14-be15-8f2ef6093310","Type":"ContainerDied","Data":"aedb0dbece973442c87095e420dec4503758c6c0eddabc0ed38179a3961be402"} Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.908313 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" event={"ID":"fa66113b-5836-4a14-be15-8f2ef6093310","Type":"ContainerStarted","Data":"48994deeb70ef0d37db1da11f835db614712a42e17dbc2690dac22983baaa514"} Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.917012 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.922101 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fa93f02-121b-49f9-a08b-e04f44a142f8","Type":"ContainerStarted","Data":"ee8d79c430c570e23cc92c31a317c8619eb8070684ee32ea9790451e8ccd57b9"} Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.927732 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.948745 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d6f9dff64-d2zkv" event={"ID":"4e3198f8-260a-4ccd-a470-100aa54835c0","Type":"ContainerStarted","Data":"caead0e17cf069497f3ae2a53fcb9855e93955267c14d290c29125e163f45598"} Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.948784 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d6f9dff64-d2zkv" event={"ID":"4e3198f8-260a-4ccd-a470-100aa54835c0","Type":"ContainerStarted","Data":"f75c5d65e569e97c03bd991e25c6957c8e5968140bc16a6477f61aecc4f4a64b"} Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.949571 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.949600 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.962414 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" event={"ID":"13440493-b7a7-40a6-9de1-e375ae1c8404","Type":"ContainerStarted","Data":"4cc9f44db26c7128d440cb36e2064616ebc2e09938be332e834edc53622092e3"} Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.965184 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-dbc6679f5-nfj96" event={"ID":"bc658137-f491-4e87-bdaa-cdc34f59a3a9","Type":"ContainerStarted","Data":"102ffaec46899253f8fb8318b51dae758ef6dcd657052dc1feaa4f01bb9e86d9"} Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.967895 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tzcqj" event={"ID":"128eda36-f009-47c2-8939-73ec23da0d4c","Type":"ContainerStarted","Data":"164fc379f0c8290b0e60bd9c89caa60822e3fe36fedd06083adb12c19c5e3408"} Nov 24 21:56:26 crc kubenswrapper[4767]: I1124 21:56:26.982115 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7d6f9dff64-d2zkv" podStartSLOduration=2.982082361 podStartE2EDuration="2.982082361s" podCreationTimestamp="2025-11-24 21:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:26.977963444 +0000 UTC 
m=+1069.894946826" watchObservedRunningTime="2025-11-24 21:56:26.982082361 +0000 UTC m=+1069.899065733" Nov 24 21:56:27 crc kubenswrapper[4767]: I1124 21:56:27.028892 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tzcqj" podStartSLOduration=4.824330925 podStartE2EDuration="47.02887497s" podCreationTimestamp="2025-11-24 21:55:40 +0000 UTC" firstStartedPulling="2025-11-24 21:55:42.661374455 +0000 UTC m=+1025.578357827" lastFinishedPulling="2025-11-24 21:56:24.8659185 +0000 UTC m=+1067.782901872" observedRunningTime="2025-11-24 21:56:27.027651355 +0000 UTC m=+1069.944634727" watchObservedRunningTime="2025-11-24 21:56:27.02887497 +0000 UTC m=+1069.945858342" Nov 24 21:56:27 crc kubenswrapper[4767]: I1124 21:56:27.979420 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" event={"ID":"fa66113b-5836-4a14-be15-8f2ef6093310","Type":"ContainerStarted","Data":"efa627a463412baccc8a672fb208753e727137216839867333d72081681dd5b1"} Nov 24 21:56:27 crc kubenswrapper[4767]: I1124 21:56:27.979988 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:27 crc kubenswrapper[4767]: I1124 21:56:27.981260 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fa93f02-121b-49f9-a08b-e04f44a142f8","Type":"ContainerStarted","Data":"5bc1de983909cf8b558f2c3434823057f0116823799e07bcb1042fc8ecec3d57"} Nov 24 21:56:27 crc kubenswrapper[4767]: I1124 21:56:27.985072 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69dfb67c9d-pwwx6" event={"ID":"40072229-e5be-485f-82d8-7e8c17e2c8c3","Type":"ContainerStarted","Data":"da6de2ed159121426438867b09047e89a7cbff50cfd1ce0aeb95313b04dda7e0"} Nov 24 21:56:27 crc kubenswrapper[4767]: I1124 21:56:27.985107 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:27 crc kubenswrapper[4767]: I1124 21:56:27.985568 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.026134 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.026113344 podStartE2EDuration="5.026113344s" podCreationTimestamp="2025-11-24 21:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:28.024238581 +0000 UTC m=+1070.941221943" watchObservedRunningTime="2025-11-24 21:56:28.026113344 +0000 UTC m=+1070.943096716" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.026377 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" podStartSLOduration=4.026371711 podStartE2EDuration="4.026371711s" podCreationTimestamp="2025-11-24 21:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:28.004554072 +0000 UTC m=+1070.921537444" watchObservedRunningTime="2025-11-24 21:56:28.026371711 +0000 UTC m=+1070.943355083" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.067867 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-69dfb67c9d-pwwx6" podStartSLOduration=3.067849489 
podStartE2EDuration="3.067849489s" podCreationTimestamp="2025-11-24 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:28.048634713 +0000 UTC m=+1070.965618085" watchObservedRunningTime="2025-11-24 21:56:28.067849489 +0000 UTC m=+1070.984832861" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.590636 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-9d666dcfd-kpjw6"] Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.592048 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.594444 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.594675 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.655315 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9d666dcfd-kpjw6"] Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.671349 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-internal-tls-certs\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.671601 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-config-data\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.671710 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-combined-ca-bundle\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.671790 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-public-tls-certs\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.671871 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-config-data-custom\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.671963 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgz7t\" (UniqueName: \"kubernetes.io/projected/521b6c97-0928-488c-a85c-0b2e777cae87-kube-api-access-lgz7t\") pod 
\"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.672074 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/521b6c97-0928-488c-a85c-0b2e777cae87-logs\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.778415 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-internal-tls-certs\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.778508 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-config-data\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.778591 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-combined-ca-bundle\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.778620 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-public-tls-certs\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.778650 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-config-data-custom\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.778682 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgz7t\" (UniqueName: \"kubernetes.io/projected/521b6c97-0928-488c-a85c-0b2e777cae87-kube-api-access-lgz7t\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.778719 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/521b6c97-0928-488c-a85c-0b2e777cae87-logs\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.779395 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/521b6c97-0928-488c-a85c-0b2e777cae87-logs\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") 
" pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.784191 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-public-tls-certs\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.784810 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-internal-tls-certs\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.785059 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-config-data\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.787728 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-config-data-custom\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.787864 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521b6c97-0928-488c-a85c-0b2e777cae87-combined-ca-bundle\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.797170 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgz7t\" (UniqueName: \"kubernetes.io/projected/521b6c97-0928-488c-a85c-0b2e777cae87-kube-api-access-lgz7t\") pod \"barbican-api-9d666dcfd-kpjw6\" (UID: \"521b6c97-0928-488c-a85c-0b2e777cae87\") " pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:28 crc kubenswrapper[4767]: I1124 21:56:28.914373 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:29 crc kubenswrapper[4767]: I1124 21:56:29.641467 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9d666dcfd-kpjw6"] Nov 24 21:56:29 crc kubenswrapper[4767]: W1124 21:56:29.658359 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod521b6c97_0928_488c_a85c_0b2e777cae87.slice/crio-a5a8d4a9cd15692565b06647e77c8f23f7a62c87c45573b771bdea7343adca29 WatchSource:0}: Error finding container a5a8d4a9cd15692565b06647e77c8f23f7a62c87c45573b771bdea7343adca29: Status 404 returned error can't find the container with id a5a8d4a9cd15692565b06647e77c8f23f7a62c87c45573b771bdea7343adca29 Nov 24 21:56:30 crc kubenswrapper[4767]: I1124 21:56:30.029151 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" event={"ID":"13440493-b7a7-40a6-9de1-e375ae1c8404","Type":"ContainerStarted","Data":"ac7d3d2fa7ae3cfb5213d40858df54b69c9ae71bfcfe55990c12bf8e1c673e2e"} Nov 24 21:56:30 crc kubenswrapper[4767]: I1124 21:56:30.029459 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" event={"ID":"13440493-b7a7-40a6-9de1-e375ae1c8404","Type":"ContainerStarted","Data":"9ad0fd671d00646a96c7ef377cdcbcef620a65e47ce7ae3ac70ee9db645a9b4e"} Nov 24 21:56:30 crc kubenswrapper[4767]: I1124 21:56:30.031476 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-dbc6679f5-nfj96" event={"ID":"bc658137-f491-4e87-bdaa-cdc34f59a3a9","Type":"ContainerStarted","Data":"1e606bba5883786595e201b3241e83e80f7a4e6362a5b81deea5efdefc8c7f3c"} Nov 24 21:56:30 crc kubenswrapper[4767]: I1124 21:56:30.031501 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-dbc6679f5-nfj96" event={"ID":"bc658137-f491-4e87-bdaa-cdc34f59a3a9","Type":"ContainerStarted","Data":"ce2e0057e31755bc6aa2232e34397927b5ae6b010e6e76483625a7c8303f9b93"} Nov 24 21:56:30 crc kubenswrapper[4767]: I1124 21:56:30.035391 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d666dcfd-kpjw6" event={"ID":"521b6c97-0928-488c-a85c-0b2e777cae87","Type":"ContainerStarted","Data":"3050038c8b38d6f48eb8f5dfef3505585ceff4c936a4f8c1256684f611e2ed1b"} Nov 24 21:56:30 crc kubenswrapper[4767]: I1124 21:56:30.035448 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d666dcfd-kpjw6" event={"ID":"521b6c97-0928-488c-a85c-0b2e777cae87","Type":"ContainerStarted","Data":"a5a8d4a9cd15692565b06647e77c8f23f7a62c87c45573b771bdea7343adca29"} Nov 24 21:56:30 crc kubenswrapper[4767]: I1124 21:56:30.053496 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-c6fc47588-98bn5" podStartSLOduration=2.895485963 podStartE2EDuration="6.053478224s" podCreationTimestamp="2025-11-24 21:56:24 +0000 UTC" firstStartedPulling="2025-11-24 21:56:26.007492951 +0000 UTC m=+1068.924476323" lastFinishedPulling="2025-11-24 21:56:29.165485212 +0000 UTC m=+1072.082468584" observedRunningTime="2025-11-24 21:56:30.046903488 +0000 UTC m=+1072.963886850" watchObservedRunningTime="2025-11-24 21:56:30.053478224 +0000 UTC m=+1072.970461596" Nov 24 21:56:30 crc kubenswrapper[4767]: I1124 21:56:30.072229 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-dbc6679f5-nfj96" podStartSLOduration=2.880036734 
podStartE2EDuration="6.072212276s" podCreationTimestamp="2025-11-24 21:56:24 +0000 UTC" firstStartedPulling="2025-11-24 21:56:25.973347361 +0000 UTC m=+1068.890330733" lastFinishedPulling="2025-11-24 21:56:29.165522893 +0000 UTC m=+1072.082506275" observedRunningTime="2025-11-24 21:56:30.067280906 +0000 UTC m=+1072.984264298" watchObservedRunningTime="2025-11-24 21:56:30.072212276 +0000 UTC m=+1072.989195648" Nov 24 21:56:30 crc kubenswrapper[4767]: I1124 21:56:30.805210 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6d69c9d5c6-qr8nq" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Nov 24 21:56:30 crc kubenswrapper[4767]: I1124 21:56:30.823577 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-567c96d68-4rmbm" podUID="f3a751ba-fb23-4cd3-a1f7-2c843e04ab47" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Nov 24 21:56:31 crc kubenswrapper[4767]: I1124 21:56:31.049826 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d666dcfd-kpjw6" event={"ID":"521b6c97-0928-488c-a85c-0b2e777cae87","Type":"ContainerStarted","Data":"e857bd92c193f0c70ad72443f348d6656bef2475594563d710ae48be660bb162"} Nov 24 21:56:31 crc kubenswrapper[4767]: I1124 21:56:31.076482 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-9d666dcfd-kpjw6" podStartSLOduration=3.076465359 podStartE2EDuration="3.076465359s" podCreationTimestamp="2025-11-24 21:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:31.066435835 +0000 UTC m=+1073.983419207" watchObservedRunningTime="2025-11-24 21:56:31.076465359 +0000 UTC m=+1073.993448731" Nov 24 21:56:31 crc kubenswrapper[4767]: I1124 21:56:31.329570 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:56:31 crc kubenswrapper[4767]: I1124 21:56:31.329816 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerName="watcher-api-log" containerID="cri-o://eed7c5396bda53207c76f488a9d06177b3be055e486a84e1090102f0579a4531" gracePeriod=30 Nov 24 21:56:31 crc kubenswrapper[4767]: I1124 21:56:31.329922 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerName="watcher-api" containerID="cri-o://b94edabda1034bf0bd7f22ca6890896c09d5b2b7477fb6b5ea59044cde213744" gracePeriod=30 Nov 24 21:56:32 crc kubenswrapper[4767]: I1124 21:56:32.062835 4767 generic.go:334] "Generic (PLEG): container finished" podID="128eda36-f009-47c2-8939-73ec23da0d4c" containerID="164fc379f0c8290b0e60bd9c89caa60822e3fe36fedd06083adb12c19c5e3408" exitCode=0 Nov 24 21:56:32 crc kubenswrapper[4767]: I1124 21:56:32.062927 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tzcqj" event={"ID":"128eda36-f009-47c2-8939-73ec23da0d4c","Type":"ContainerDied","Data":"164fc379f0c8290b0e60bd9c89caa60822e3fe36fedd06083adb12c19c5e3408"} Nov 24 21:56:32 crc kubenswrapper[4767]: I1124 21:56:32.068892 4767 
generic.go:334] "Generic (PLEG): container finished" podID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerID="eed7c5396bda53207c76f488a9d06177b3be055e486a84e1090102f0579a4531" exitCode=143 Nov 24 21:56:32 crc kubenswrapper[4767]: I1124 21:56:32.068996 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb","Type":"ContainerDied","Data":"eed7c5396bda53207c76f488a9d06177b3be055e486a84e1090102f0579a4531"} Nov 24 21:56:32 crc kubenswrapper[4767]: I1124 21:56:32.069080 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:32 crc kubenswrapper[4767]: I1124 21:56:32.069096 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:34 crc kubenswrapper[4767]: I1124 21:56:34.155253 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 21:56:34 crc kubenswrapper[4767]: I1124 21:56:34.156087 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 21:56:34 crc kubenswrapper[4767]: I1124 21:56:34.241489 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 21:56:34 crc kubenswrapper[4767]: I1124 21:56:34.247452 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 21:56:34 crc kubenswrapper[4767]: I1124 21:56:34.480040 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": read tcp 10.217.0.2:46388->10.217.0.167:9322: read: connection reset by peer" Nov 24 21:56:34 crc kubenswrapper[4767]: I1124 21:56:34.480081 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": read tcp 10.217.0.2:46378->10.217.0.167:9322: read: connection reset by peer" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.110375 4767 generic.go:334] "Generic (PLEG): container finished" podID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerID="b94edabda1034bf0bd7f22ca6890896c09d5b2b7477fb6b5ea59044cde213744" exitCode=0 Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.110575 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb","Type":"ContainerDied","Data":"b94edabda1034bf0bd7f22ca6890896c09d5b2b7477fb6b5ea59044cde213744"} Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.111210 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.111414 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.422387 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.484511 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd 
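Probe traffic dominates this stretch: the prober.go:107 "Probe failed" entries carry the probe type, pod, container, and the raw HTTP error, while the kubelet.go:2542 "SyncLoop (probe)" entries record the resulting status transitions (an empty status means not yet ready; glance's startup probe flips from "unhealthy" to "started" in under 100 ms). The watcher-api-0 readiness failures are expected here: its containers are being killed above, so the probe connections are reset. A sketch that tallies failures by pod and probe type; the regex is fitted to these sample lines only:

```python
import re
from collections import Counter

# Mirrors: prober.go:107] "Probe failed" probeType="..." pod="ns/name" ...
PROBE = re.compile(r'"Probe failed" probeType="(\w+)" pod="([^"]+)"')

def probe_failures(text: str) -> Counter:
    """Count kubelet probe failures keyed by (pod, probe type)."""
    return Counter((pod, ptype) for ptype, pod in PROBE.findall(text))

# e.g. over this excerpt:
#   ('openstack/watcher-api-0', 'Readiness')        -> 2
#   ('openstack/horizon-6d69c9d5c6-qr8nq', 'Startup') -> 1
```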
Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.484566 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.486358 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vkcs7"]
Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.486872 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" podUID="d949a6f4-9d83-42c5-b4df-e79178848c5f" containerName="dnsmasq-dns" containerID="cri-o://b7e62785ab79523ea265bb9f0ab2bb098e95a50d92a88879ac58a4b1c3bb9433" gracePeriod=10
Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.530137 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tzcqj"
Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.646832 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-config-data\") pod \"128eda36-f009-47c2-8939-73ec23da0d4c\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") "
Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.646872 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29fsh\" (UniqueName: \"kubernetes.io/projected/128eda36-f009-47c2-8939-73ec23da0d4c-kube-api-access-29fsh\") pod \"128eda36-f009-47c2-8939-73ec23da0d4c\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") "
Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.646961 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-db-sync-config-data\") pod \"128eda36-f009-47c2-8939-73ec23da0d4c\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") "
Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.646992 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/128eda36-f009-47c2-8939-73ec23da0d4c-etc-machine-id\") pod \"128eda36-f009-47c2-8939-73ec23da0d4c\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") "
Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.647028 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-combined-ca-bundle\") pod \"128eda36-f009-47c2-8939-73ec23da0d4c\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") "
Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.647055 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-scripts\") pod \"128eda36-f009-47c2-8939-73ec23da0d4c\" (UID: \"128eda36-f009-47c2-8939-73ec23da0d4c\") "
Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.648116 4767 operation_generator.go:803]
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/128eda36-f009-47c2-8939-73ec23da0d4c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "128eda36-f009-47c2-8939-73ec23da0d4c" (UID: "128eda36-f009-47c2-8939-73ec23da0d4c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.660833 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "128eda36-f009-47c2-8939-73ec23da0d4c" (UID: "128eda36-f009-47c2-8939-73ec23da0d4c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.672900 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-scripts" (OuterVolumeSpecName: "scripts") pod "128eda36-f009-47c2-8939-73ec23da0d4c" (UID: "128eda36-f009-47c2-8939-73ec23da0d4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.679566 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128eda36-f009-47c2-8939-73ec23da0d4c-kube-api-access-29fsh" (OuterVolumeSpecName: "kube-api-access-29fsh") pod "128eda36-f009-47c2-8939-73ec23da0d4c" (UID: "128eda36-f009-47c2-8939-73ec23da0d4c"). InnerVolumeSpecName "kube-api-access-29fsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.748837 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29fsh\" (UniqueName: \"kubernetes.io/projected/128eda36-f009-47c2-8939-73ec23da0d4c-kube-api-access-29fsh\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.749116 4767 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.749187 4767 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/128eda36-f009-47c2-8939-73ec23da0d4c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.749238 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.784715 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.806355 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "128eda36-f009-47c2-8939-73ec23da0d4c" (UID: "128eda36-f009-47c2-8939-73ec23da0d4c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.850751 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-custom-prometheus-ca\") pod \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.850800 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t94wj\" (UniqueName: \"kubernetes.io/projected/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-kube-api-access-t94wj\") pod \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.850839 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-combined-ca-bundle\") pod \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.850954 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-logs\") pod \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.852288 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-config-data\") pod \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\" (UID: \"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb\") " Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.853030 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.868884 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-logs" (OuterVolumeSpecName: "logs") pod "5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" (UID: "5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.870454 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-kube-api-access-t94wj" (OuterVolumeSpecName: "kube-api-access-t94wj") pod "5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" (UID: "5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb"). InnerVolumeSpecName "kube-api-access-t94wj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:35 crc kubenswrapper[4767]: E1124 21:56:35.885843 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.887839 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-config-data" (OuterVolumeSpecName: "config-data") pod "128eda36-f009-47c2-8939-73ec23da0d4c" (UID: "128eda36-f009-47c2-8939-73ec23da0d4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.931428 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" (UID: "5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.931886 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" (UID: "5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.981204 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/128eda36-f009-47c2-8939-73ec23da0d4c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.981231 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.981247 4767 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.981257 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t94wj\" (UniqueName: \"kubernetes.io/projected/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-kube-api-access-t94wj\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:35 crc kubenswrapper[4767]: I1124 21:56:35.981277 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.009151 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-config-data" (OuterVolumeSpecName: "config-data") pod "5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" (UID: "5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.086552 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.141617 4767 generic.go:334] "Generic (PLEG): container finished" podID="d949a6f4-9d83-42c5-b4df-e79178848c5f" containerID="b7e62785ab79523ea265bb9f0ab2bb098e95a50d92a88879ac58a4b1c3bb9433" exitCode=0 Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.141681 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" event={"ID":"d949a6f4-9d83-42c5-b4df-e79178848c5f","Type":"ContainerDied","Data":"b7e62785ab79523ea265bb9f0ab2bb098e95a50d92a88879ac58a4b1c3bb9433"} Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.153653 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb","Type":"ContainerDied","Data":"4a2baf73deceda8af17b2640d66cc6019a750195a2adcc4f1f3be06631c44085"} Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.153707 4767 scope.go:117] "RemoveContainer" containerID="b94edabda1034bf0bd7f22ca6890896c09d5b2b7477fb6b5ea59044cde213744" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.154081 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.178625 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a9ba0d-f67a-4887-82d8-3135cf56098a","Type":"ContainerStarted","Data":"e87e2d9f61853a2d2351e3b3fa8ea1641378a07e3be2f3979b5bc1244559ec92"} Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.178804 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="ceilometer-notification-agent" containerID="cri-o://fb6fbe7605b29465a47f88ba3630d4e3a7ea3d9849c6e90239ccaf7407709025" gracePeriod=30 Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.179155 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="proxy-httpd" containerID="cri-o://e87e2d9f61853a2d2351e3b3fa8ea1641378a07e3be2f3979b5bc1244559ec92" gracePeriod=30 Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.179193 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.179230 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="sg-core" containerID="cri-o://7939c13920e5f49beeeaa6b27de898ca5f5a9be94aaab406ff160abeaab191c6" gracePeriod=30 Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.204238 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tzcqj" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.204356 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tzcqj" event={"ID":"128eda36-f009-47c2-8939-73ec23da0d4c","Type":"ContainerDied","Data":"7165f9d5c3e3786af10718f074fbc95eaf35f2740f343dbf58ad7e04dda46879"} Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.204387 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7165f9d5c3e3786af10718f074fbc95eaf35f2740f343dbf58ad7e04dda46879" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.216435 4767 scope.go:117] "RemoveContainer" containerID="eed7c5396bda53207c76f488a9d06177b3be055e486a84e1090102f0579a4531" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.269833 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.306776 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.348084 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" path="/var/lib/kubelet/pods/5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb/volumes" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.349014 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:56:36 crc kubenswrapper[4767]: E1124 21:56:36.349316 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128eda36-f009-47c2-8939-73ec23da0d4c" containerName="cinder-db-sync" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.349326 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="128eda36-f009-47c2-8939-73ec23da0d4c" containerName="cinder-db-sync" Nov 24 21:56:36 crc kubenswrapper[4767]: E1124 21:56:36.349346 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerName="watcher-api-log" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.349353 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerName="watcher-api-log" Nov 24 21:56:36 crc kubenswrapper[4767]: E1124 21:56:36.349363 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerName="watcher-api" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.349369 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerName="watcher-api" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.349561 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="128eda36-f009-47c2-8939-73ec23da0d4c" containerName="cinder-db-sync" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.349577 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerName="watcher-api" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.349596 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1b27c4-98d7-4f42-8c86-3ad108b0bcfb" containerName="watcher-api-log" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.365212 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.365324 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.368726 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.369004 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.369113 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.409654 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.498670 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-nb\") pod \"d949a6f4-9d83-42c5-b4df-e79178848c5f\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.498879 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-sb\") pod \"d949a6f4-9d83-42c5-b4df-e79178848c5f\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.498923 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-config\") pod \"d949a6f4-9d83-42c5-b4df-e79178848c5f\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.498988 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-swift-storage-0\") pod \"d949a6f4-9d83-42c5-b4df-e79178848c5f\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.499133 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8pzp\" (UniqueName: \"kubernetes.io/projected/d949a6f4-9d83-42c5-b4df-e79178848c5f-kube-api-access-k8pzp\") pod \"d949a6f4-9d83-42c5-b4df-e79178848c5f\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.499222 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-svc\") pod \"d949a6f4-9d83-42c5-b4df-e79178848c5f\" (UID: \"d949a6f4-9d83-42c5-b4df-e79178848c5f\") " Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.499498 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.499551 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-public-tls-certs\") pod \"watcher-api-0\" 
(UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.499577 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.499596 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-config-data\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.499619 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d92504-eb02-4711-a860-bed97da288e0-logs\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.499932 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.499998 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9n6l\" (UniqueName: \"kubernetes.io/projected/19d92504-eb02-4711-a860-bed97da288e0-kube-api-access-w9n6l\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.536447 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d949a6f4-9d83-42c5-b4df-e79178848c5f-kube-api-access-k8pzp" (OuterVolumeSpecName: "kube-api-access-k8pzp") pod "d949a6f4-9d83-42c5-b4df-e79178848c5f" (UID: "d949a6f4-9d83-42c5-b4df-e79178848c5f"). InnerVolumeSpecName "kube-api-access-k8pzp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.604571 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-public-tls-certs\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.604619 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.604641 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-config-data\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.604661 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d92504-eb02-4711-a860-bed97da288e0-logs\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.604704 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.604754 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9n6l\" (UniqueName: \"kubernetes.io/projected/19d92504-eb02-4711-a860-bed97da288e0-kube-api-access-w9n6l\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.604842 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.604895 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8pzp\" (UniqueName: \"kubernetes.io/projected/d949a6f4-9d83-42c5-b4df-e79178848c5f-kube-api-access-k8pzp\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.609024 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d92504-eb02-4711-a860-bed97da288e0-logs\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.611118 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-public-tls-certs\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 
crc kubenswrapper[4767]: I1124 21:56:36.611296 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.623061 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-config-data\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.623121 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.623412 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d92504-eb02-4711-a860-bed97da288e0-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.623765 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d949a6f4-9d83-42c5-b4df-e79178848c5f" (UID: "d949a6f4-9d83-42c5-b4df-e79178848c5f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.641529 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d949a6f4-9d83-42c5-b4df-e79178848c5f" (UID: "d949a6f4-9d83-42c5-b4df-e79178848c5f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.661010 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9n6l\" (UniqueName: \"kubernetes.io/projected/19d92504-eb02-4711-a860-bed97da288e0-kube-api-access-w9n6l\") pod \"watcher-api-0\" (UID: \"19d92504-eb02-4711-a860-bed97da288e0\") " pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.679550 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d949a6f4-9d83-42c5-b4df-e79178848c5f" (UID: "d949a6f4-9d83-42c5-b4df-e79178848c5f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.701754 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d949a6f4-9d83-42c5-b4df-e79178848c5f" (UID: "d949a6f4-9d83-42c5-b4df-e79178848c5f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.707410 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.707599 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.707682 4767 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.707748 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.752900 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.760809 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-config" (OuterVolumeSpecName: "config") pod "d949a6f4-9d83-42c5-b4df-e79178848c5f" (UID: "d949a6f4-9d83-42c5-b4df-e79178848c5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.809378 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d949a6f4-9d83-42c5-b4df-e79178848c5f-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.809722 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:56:36 crc kubenswrapper[4767]: E1124 21:56:36.810158 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d949a6f4-9d83-42c5-b4df-e79178848c5f" containerName="dnsmasq-dns" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.810175 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d949a6f4-9d83-42c5-b4df-e79178848c5f" containerName="dnsmasq-dns" Nov 24 21:56:36 crc kubenswrapper[4767]: E1124 21:56:36.810210 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d949a6f4-9d83-42c5-b4df-e79178848c5f" containerName="init" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.810217 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d949a6f4-9d83-42c5-b4df-e79178848c5f" containerName="init" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.810399 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="d949a6f4-9d83-42c5-b4df-e79178848c5f" containerName="dnsmasq-dns" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.811387 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.822344 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.822593 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.822694 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7kxpd" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.822803 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.858390 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.893322 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d7gmk"] Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.896627 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.912283 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.912339 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-scripts\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.912417 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.912457 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08dd7085-e79f-45d5-88f5-434f4d41552e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.912505 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.912526 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdrbn\" (UniqueName: \"kubernetes.io/projected/08dd7085-e79f-45d5-88f5-434f4d41552e-kube-api-access-jdrbn\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" 
Nov 24 21:56:36 crc kubenswrapper[4767]: I1124 21:56:36.925633 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d7gmk"] Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016594 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016645 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdrbn\" (UniqueName: \"kubernetes.io/projected/08dd7085-e79f-45d5-88f5-434f4d41552e-kube-api-access-jdrbn\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016671 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016706 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016751 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016777 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016803 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-scripts\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016825 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016846 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-config\") pod 
\"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016891 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016922 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08dd7085-e79f-45d5-88f5-434f4d41552e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.016941 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krh6\" (UniqueName: \"kubernetes.io/projected/0caee68e-529a-4a72-95af-fda2e98e230b-kube-api-access-7krh6\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.025382 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08dd7085-e79f-45d5-88f5-434f4d41552e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.025415 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.029010 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.031465 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.032387 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.033611 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-scripts\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.034707 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.035982 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.049649 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.054362 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdrbn\" (UniqueName: \"kubernetes.io/projected/08dd7085-e79f-45d5-88f5-434f4d41552e-kube-api-access-jdrbn\") pod \"cinder-scheduler-0\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119356 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119407 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-scripts\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119438 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119481 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119498 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-config\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 
21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119514 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e8254e-3260-46da-b633-a86bacc64ea2-logs\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119536 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119566 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmmvt\" (UniqueName: \"kubernetes.io/projected/d4e8254e-3260-46da-b633-a86bacc64ea2-kube-api-access-qmmvt\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119603 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7krh6\" (UniqueName: \"kubernetes.io/projected/0caee68e-529a-4a72-95af-fda2e98e230b-kube-api-access-7krh6\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119627 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data-custom\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119659 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119681 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4e8254e-3260-46da-b633-a86bacc64ea2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.119704 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.120530 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.121760 4767 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.122806 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.122960 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-config\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.125437 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.140688 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7krh6\" (UniqueName: \"kubernetes.io/projected/0caee68e-529a-4a72-95af-fda2e98e230b-kube-api-access-7krh6\") pod \"dnsmasq-dns-5c9776ccc5-d7gmk\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.190808 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.221388 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.221445 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmmvt\" (UniqueName: \"kubernetes.io/projected/d4e8254e-3260-46da-b633-a86bacc64ea2-kube-api-access-qmmvt\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.221496 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data-custom\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.221534 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4e8254e-3260-46da-b633-a86bacc64ea2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.221571 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.221590 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-scripts\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.221639 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e8254e-3260-46da-b633-a86bacc64ea2-logs\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.222053 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e8254e-3260-46da-b633-a86bacc64ea2-logs\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.223840 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4e8254e-3260-46da-b633-a86bacc64ea2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.228702 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-scripts\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 
21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.231667 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.235619 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.242905 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.260052 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmmvt\" (UniqueName: \"kubernetes.io/projected/d4e8254e-3260-46da-b633-a86bacc64ea2-kube-api-access-qmmvt\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.262946 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data-custom\") pod \"cinder-api-0\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.294386 4767 generic.go:334] "Generic (PLEG): container finished" podID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerID="e87e2d9f61853a2d2351e3b3fa8ea1641378a07e3be2f3979b5bc1244559ec92" exitCode=0 Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.294504 4767 generic.go:334] "Generic (PLEG): container finished" podID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerID="7939c13920e5f49beeeaa6b27de898ca5f5a9be94aaab406ff160abeaab191c6" exitCode=2 Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.294630 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a9ba0d-f67a-4887-82d8-3135cf56098a","Type":"ContainerDied","Data":"e87e2d9f61853a2d2351e3b3fa8ea1641378a07e3be2f3979b5bc1244559ec92"} Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.294723 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a9ba0d-f67a-4887-82d8-3135cf56098a","Type":"ContainerDied","Data":"7939c13920e5f49beeeaa6b27de898ca5f5a9be94aaab406ff160abeaab191c6"} Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.309682 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" event={"ID":"d949a6f4-9d83-42c5-b4df-e79178848c5f","Type":"ContainerDied","Data":"bf5164fd3114ffe24fa0abff2563522802366ecee98342f12f3ccef4c1d8c8dc"} Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.309887 4767 scope.go:117] "RemoveContainer" containerID="b7e62785ab79523ea265bb9f0ab2bb098e95a50d92a88879ac58a4b1c3bb9433" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.310062 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vkcs7" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.378911 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.418798 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.486400 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vkcs7"] Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.516033 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vkcs7"] Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.516147 4767 scope.go:117] "RemoveContainer" containerID="fdc7a34cf2233f9a0801c8a6ce3130d6fa50650cf051f8a63ac050fd4730a94d" Nov 24 21:56:37 crc kubenswrapper[4767]: I1124 21:56:37.861674 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:56:38 crc kubenswrapper[4767]: I1124 21:56:38.088598 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:38 crc kubenswrapper[4767]: I1124 21:56:38.093651 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d7gmk"] Nov 24 21:56:38 crc kubenswrapper[4767]: I1124 21:56:38.114057 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:56:38 crc kubenswrapper[4767]: I1124 21:56:38.422006 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d949a6f4-9d83-42c5-b4df-e79178848c5f" path="/var/lib/kubelet/pods/d949a6f4-9d83-42c5-b4df-e79178848c5f/volumes" Nov 24 21:56:38 crc kubenswrapper[4767]: I1124 21:56:38.457484 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"19d92504-eb02-4711-a860-bed97da288e0","Type":"ContainerStarted","Data":"dbbe5672cba9dfa1d883c17c4527331a21d4a9d87cf4e6422aae033e7dd0b881"} Nov 24 21:56:38 crc kubenswrapper[4767]: I1124 21:56:38.457530 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"19d92504-eb02-4711-a860-bed97da288e0","Type":"ContainerStarted","Data":"bf8ced02c4f9c0dade545d4ab7ee9729673a9425fb3b137c50a208d68c70471c"} Nov 24 21:56:38 crc kubenswrapper[4767]: I1124 21:56:38.470160 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d4e8254e-3260-46da-b633-a86bacc64ea2","Type":"ContainerStarted","Data":"974e46ca261fcfe1b5c18f0901b21c2ba9fbe56e2687840d736447205e58f6ed"} Nov 24 21:56:38 crc kubenswrapper[4767]: I1124 21:56:38.486441 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08dd7085-e79f-45d5-88f5-434f4d41552e","Type":"ContainerStarted","Data":"022185f0e3f081cfc064185b0e0877b5b403898659a0d37e85e9351062c1493b"} Nov 24 21:56:38 crc kubenswrapper[4767]: I1124 21:56:38.499511 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" event={"ID":"0caee68e-529a-4a72-95af-fda2e98e230b","Type":"ContainerStarted","Data":"a722988826fbfaef25f2e43e1cce4b29d7a9c26b324e6ef4b10885d3dc925bbe"} Nov 24 21:56:38 crc kubenswrapper[4767]: I1124 21:56:38.816925 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:39 crc kubenswrapper[4767]: I1124 21:56:39.055624 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:56:39 crc kubenswrapper[4767]: I1124 21:56:39.559071 4767 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"19d92504-eb02-4711-a860-bed97da288e0","Type":"ContainerStarted","Data":"a44ab0bcba49af6cc1f112af18670e18139475a37f599b91c68330d6fde0960d"} Nov 24 21:56:39 crc kubenswrapper[4767]: I1124 21:56:39.560377 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Nov 24 21:56:39 crc kubenswrapper[4767]: I1124 21:56:39.583915 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.583893473 podStartE2EDuration="3.583893473s" podCreationTimestamp="2025-11-24 21:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:39.581081013 +0000 UTC m=+1082.498064385" watchObservedRunningTime="2025-11-24 21:56:39.583893473 +0000 UTC m=+1082.500876845" Nov 24 21:56:39 crc kubenswrapper[4767]: I1124 21:56:39.605511 4767 generic.go:334] "Generic (PLEG): container finished" podID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerID="fb6fbe7605b29465a47f88ba3630d4e3a7ea3d9849c6e90239ccaf7407709025" exitCode=0 Nov 24 21:56:39 crc kubenswrapper[4767]: I1124 21:56:39.605613 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a9ba0d-f67a-4887-82d8-3135cf56098a","Type":"ContainerDied","Data":"fb6fbe7605b29465a47f88ba3630d4e3a7ea3d9849c6e90239ccaf7407709025"} Nov 24 21:56:39 crc kubenswrapper[4767]: I1124 21:56:39.612222 4767 generic.go:334] "Generic (PLEG): container finished" podID="0caee68e-529a-4a72-95af-fda2e98e230b" containerID="c843f6e5c6c54aa04130350d68623ec3357b9aaf6d8e48d785a52b3e804e7edf" exitCode=0 Nov 24 21:56:39 crc kubenswrapper[4767]: I1124 21:56:39.612395 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" event={"ID":"0caee68e-529a-4a72-95af-fda2e98e230b","Type":"ContainerDied","Data":"c843f6e5c6c54aa04130350d68623ec3357b9aaf6d8e48d785a52b3e804e7edf"} Nov 24 21:56:39 crc kubenswrapper[4767]: I1124 21:56:39.623351 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d4e8254e-3260-46da-b633-a86bacc64ea2","Type":"ContainerStarted","Data":"fb2e1cac4f1b51fc87973ad0d5819cdeb2b226eadcdb476e13469aa671999581"} Nov 24 21:56:39 crc kubenswrapper[4767]: I1124 21:56:39.860351 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 21:56:39 crc kubenswrapper[4767]: I1124 21:56:39.860463 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.075320 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.156325 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.333952 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkxc9\" (UniqueName: \"kubernetes.io/projected/d7a9ba0d-f67a-4887-82d8-3135cf56098a-kube-api-access-lkxc9\") pod \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.334002 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-run-httpd\") pod \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.334075 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-config-data\") pod \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.334154 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-sg-core-conf-yaml\") pod \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.334178 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-log-httpd\") pod \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.334204 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-scripts\") pod \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.334289 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-combined-ca-bundle\") pod \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\" (UID: \"d7a9ba0d-f67a-4887-82d8-3135cf56098a\") " Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.338760 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d7a9ba0d-f67a-4887-82d8-3135cf56098a" (UID: "d7a9ba0d-f67a-4887-82d8-3135cf56098a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.339053 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d7a9ba0d-f67a-4887-82d8-3135cf56098a" (UID: "d7a9ba0d-f67a-4887-82d8-3135cf56098a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.355591 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a9ba0d-f67a-4887-82d8-3135cf56098a-kube-api-access-lkxc9" (OuterVolumeSpecName: "kube-api-access-lkxc9") pod "d7a9ba0d-f67a-4887-82d8-3135cf56098a" (UID: "d7a9ba0d-f67a-4887-82d8-3135cf56098a"). InnerVolumeSpecName "kube-api-access-lkxc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.365329 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-scripts" (OuterVolumeSpecName: "scripts") pod "d7a9ba0d-f67a-4887-82d8-3135cf56098a" (UID: "d7a9ba0d-f67a-4887-82d8-3135cf56098a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.410410 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d7a9ba0d-f67a-4887-82d8-3135cf56098a" (UID: "d7a9ba0d-f67a-4887-82d8-3135cf56098a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.436395 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkxc9\" (UniqueName: \"kubernetes.io/projected/d7a9ba0d-f67a-4887-82d8-3135cf56098a-kube-api-access-lkxc9\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.436422 4767 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.436431 4767 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.436439 4767 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a9ba0d-f67a-4887-82d8-3135cf56098a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.436447 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.585584 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7a9ba0d-f67a-4887-82d8-3135cf56098a" (UID: "d7a9ba0d-f67a-4887-82d8-3135cf56098a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.626477 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-config-data" (OuterVolumeSpecName: "config-data") pod "d7a9ba0d-f67a-4887-82d8-3135cf56098a" (UID: "d7a9ba0d-f67a-4887-82d8-3135cf56098a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.643590 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.650971 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.651012 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a9ba0d-f67a-4887-82d8-3135cf56098a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.710547 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" podStartSLOduration=4.710527389 podStartE2EDuration="4.710527389s" podCreationTimestamp="2025-11-24 21:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:40.708408069 +0000 UTC m=+1083.625391441" watchObservedRunningTime="2025-11-24 21:56:40.710527389 +0000 UTC m=+1083.627510761" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.725930 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.725966 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a9ba0d-f67a-4887-82d8-3135cf56098a","Type":"ContainerDied","Data":"47c30100f32fbbd9eafec906d9f57abb1c34b0e69fa284bd02cf9ce91bb2b03f"} Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.725988 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" event={"ID":"0caee68e-529a-4a72-95af-fda2e98e230b","Type":"ContainerStarted","Data":"9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50"} Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.726007 4767 scope.go:117] "RemoveContainer" containerID="e87e2d9f61853a2d2351e3b3fa8ea1641378a07e3be2f3979b5bc1244559ec92" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.817794 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-567c96d68-4rmbm" podUID="f3a751ba-fb23-4cd3-a1f7-2c843e04ab47" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.820324 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.833541 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.841763 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:56:40 crc kubenswrapper[4767]: E1124 21:56:40.842201 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="proxy-httpd" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.842215 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="proxy-httpd" Nov 24 21:56:40 crc 
kubenswrapper[4767]: E1124 21:56:40.842254 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="ceilometer-notification-agent" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.842260 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="ceilometer-notification-agent" Nov 24 21:56:40 crc kubenswrapper[4767]: E1124 21:56:40.842295 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="sg-core" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.842301 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="sg-core" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.842520 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="proxy-httpd" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.842539 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="ceilometer-notification-agent" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.842560 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" containerName="sg-core" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.844444 4767 scope.go:117] "RemoveContainer" containerID="7939c13920e5f49beeeaa6b27de898ca5f5a9be94aaab406ff160abeaab191c6" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.847012 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.848530 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.856681 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.856897 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.918209 4767 scope.go:117] "RemoveContainer" containerID="fb6fbe7605b29465a47f88ba3630d4e3a7ea3d9849c6e90239ccaf7407709025" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.963282 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-scripts\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.963338 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-log-httpd\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.963362 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-run-httpd\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.963389 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.963433 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-config-data\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.963502 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:40 crc kubenswrapper[4767]: I1124 21:56:40.963555 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-677sg\" (UniqueName: \"kubernetes.io/projected/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-kube-api-access-677sg\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.064721 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-677sg\" (UniqueName: \"kubernetes.io/projected/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-kube-api-access-677sg\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 
Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.064783 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-scripts\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.064814 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-log-httpd\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.064840 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-run-httpd\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.064874 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.064903 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-config-data\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.064970 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.066186 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-log-httpd\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.068114 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-run-httpd\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.072375 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-config-data\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.074922 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.081063 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-scripts\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.090982 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-677sg\" (UniqueName: \"kubernetes.io/projected/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-kube-api-access-677sg\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.094993 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.200760 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.674703 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.708492 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d4e8254e-3260-46da-b633-a86bacc64ea2","Type":"ContainerStarted","Data":"297a33e19d485c3416d016d3124410d90109adebf8bebbd6d7e096327223c6bb"} Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.708653 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d4e8254e-3260-46da-b633-a86bacc64ea2" containerName="cinder-api-log" containerID="cri-o://fb2e1cac4f1b51fc87973ad0d5819cdeb2b226eadcdb476e13469aa671999581" gracePeriod=30 Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.708901 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.709125 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d4e8254e-3260-46da-b633-a86bacc64ea2" containerName="cinder-api" containerID="cri-o://297a33e19d485c3416d016d3124410d90109adebf8bebbd6d7e096327223c6bb" gracePeriod=30 Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.718133 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08dd7085-e79f-45d5-88f5-434f4d41552e","Type":"ContainerStarted","Data":"e64f29c9d37a9518e1434010a80ad333e65c0512bb36912cdd66290b10caf390"} Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.734010 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.7339928780000005 podStartE2EDuration="5.733992878s" podCreationTimestamp="2025-11-24 21:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:41.730284513 +0000 UTC m=+1084.647267895" watchObservedRunningTime="2025-11-24 21:56:41.733992878 +0000 UTC m=+1084.650976250" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.756828 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
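
Note: the two kuberuntime_container.go:808 entries above are a graceful stop, not a crash: the kubelet runs any preStop hook, delivers SIGTERM, and only escalates to SIGKILL once the grace period lapses (cinder-api-log later exits with code 143, i.e. 128+SIGTERM, while cinder-api exits 0). gracePeriod=30 matches the API default for terminationGracePeriodSeconds. A hedged sketch of where that knob lives, built with the core/v1 Go types (k8s.io/api v0.23+, where Handler was renamed LifecycleHandler); the pod name, image, and preStop command are placeholders, not the actual OpenStack operator spec:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func examplePod() *corev1.Pod {
	grace := int64(30) // source of the "gracePeriod=30" seen in the kubelet log
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cinder-api-example", Namespace: "openstack"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:  "cinder-api",
				Image: "registry.example/cinder-api:latest", // placeholder
				Lifecycle: &corev1.Lifecycle{
					// Optional drain step; it runs before SIGTERM is delivered
					// and consumes part of the same 30-second budget.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 5"}},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Printf("grace period: %ds\n", *examplePod().Spec.TerminationGracePeriodSeconds)
}
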
Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.756905 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.970243 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:56:41 crc kubenswrapper[4767]: I1124 21:56:41.982945 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9d666dcfd-kpjw6" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.039073 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-69dfb67c9d-pwwx6"] Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.039317 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69dfb67c9d-pwwx6" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api-log" containerID="cri-o://a6b69d1ac3731e59f6c52d083d504bb76c12faaca5751dbde17d0f3d0a2caf1e" gracePeriod=30 Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.039408 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69dfb67c9d-pwwx6" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api" containerID="cri-o://da6de2ed159121426438867b09047e89a7cbff50cfd1ce0aeb95313b04dda7e0" gracePeriod=30 Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.050414 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-69dfb67c9d-pwwx6" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": EOF" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.352795 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a9ba0d-f67a-4887-82d8-3135cf56098a" path="/var/lib/kubelet/pods/d7a9ba0d-f67a-4887-82d8-3135cf56098a/volumes" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.724077 4767 generic.go:334] "Generic (PLEG): container finished" podID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerID="a6b69d1ac3731e59f6c52d083d504bb76c12faaca5751dbde17d0f3d0a2caf1e" exitCode=143 Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.724301 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69dfb67c9d-pwwx6" event={"ID":"40072229-e5be-485f-82d8-7e8c17e2c8c3","Type":"ContainerDied","Data":"a6b69d1ac3731e59f6c52d083d504bb76c12faaca5751dbde17d0f3d0a2caf1e"} Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.726888 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d","Type":"ContainerStarted","Data":"fd5d73a94b6669d533e390bc6f9f47c73429d64a2d98fcc13a5011a2a5133848"} Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.728866 4767 generic.go:334] "Generic (PLEG): container finished" podID="d4e8254e-3260-46da-b633-a86bacc64ea2" containerID="297a33e19d485c3416d016d3124410d90109adebf8bebbd6d7e096327223c6bb" exitCode=0 Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.728887 4767 generic.go:334] "Generic (PLEG): container finished" podID="d4e8254e-3260-46da-b633-a86bacc64ea2" containerID="fb2e1cac4f1b51fc87973ad0d5819cdeb2b226eadcdb476e13469aa671999581" exitCode=143 Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.728929 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"d4e8254e-3260-46da-b633-a86bacc64ea2","Type":"ContainerDied","Data":"297a33e19d485c3416d016d3124410d90109adebf8bebbd6d7e096327223c6bb"} Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.728945 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d4e8254e-3260-46da-b633-a86bacc64ea2","Type":"ContainerDied","Data":"fb2e1cac4f1b51fc87973ad0d5819cdeb2b226eadcdb476e13469aa671999581"} Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.729249 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d4e8254e-3260-46da-b633-a86bacc64ea2","Type":"ContainerDied","Data":"974e46ca261fcfe1b5c18f0901b21c2ba9fbe56e2687840d736447205e58f6ed"} Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.729284 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974e46ca261fcfe1b5c18f0901b21c2ba9fbe56e2687840d736447205e58f6ed" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.731293 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08dd7085-e79f-45d5-88f5-434f4d41552e","Type":"ContainerStarted","Data":"ab9d971750db19c3a8eb23dab6acc7884b7f0acd3755af4495b60efc23ea3f39"} Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.732110 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.759949 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.486599664 podStartE2EDuration="6.759934967s" podCreationTimestamp="2025-11-24 21:56:36 +0000 UTC" firstStartedPulling="2025-11-24 21:56:37.866560764 +0000 UTC m=+1080.783544136" lastFinishedPulling="2025-11-24 21:56:39.139896067 +0000 UTC m=+1082.056879439" observedRunningTime="2025-11-24 21:56:42.752969939 +0000 UTC m=+1085.669953301" watchObservedRunningTime="2025-11-24 21:56:42.759934967 +0000 UTC m=+1085.676918339" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.916247 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data\") pod \"d4e8254e-3260-46da-b633-a86bacc64ea2\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.916322 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data-custom\") pod \"d4e8254e-3260-46da-b633-a86bacc64ea2\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.916367 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-scripts\") pod \"d4e8254e-3260-46da-b633-a86bacc64ea2\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.916396 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmmvt\" (UniqueName: \"kubernetes.io/projected/d4e8254e-3260-46da-b633-a86bacc64ea2-kube-api-access-qmmvt\") pod \"d4e8254e-3260-46da-b633-a86bacc64ea2\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.916444 4767 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4e8254e-3260-46da-b633-a86bacc64ea2-etc-machine-id\") pod \"d4e8254e-3260-46da-b633-a86bacc64ea2\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.916482 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e8254e-3260-46da-b633-a86bacc64ea2-logs\") pod \"d4e8254e-3260-46da-b633-a86bacc64ea2\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.916514 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-combined-ca-bundle\") pod \"d4e8254e-3260-46da-b633-a86bacc64ea2\" (UID: \"d4e8254e-3260-46da-b633-a86bacc64ea2\") " Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.918525 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4e8254e-3260-46da-b633-a86bacc64ea2-logs" (OuterVolumeSpecName: "logs") pod "d4e8254e-3260-46da-b633-a86bacc64ea2" (UID: "d4e8254e-3260-46da-b633-a86bacc64ea2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.925706 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4e8254e-3260-46da-b633-a86bacc64ea2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d4e8254e-3260-46da-b633-a86bacc64ea2" (UID: "d4e8254e-3260-46da-b633-a86bacc64ea2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.926152 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d4e8254e-3260-46da-b633-a86bacc64ea2" (UID: "d4e8254e-3260-46da-b633-a86bacc64ea2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.928404 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e8254e-3260-46da-b633-a86bacc64ea2-kube-api-access-qmmvt" (OuterVolumeSpecName: "kube-api-access-qmmvt") pod "d4e8254e-3260-46da-b633-a86bacc64ea2" (UID: "d4e8254e-3260-46da-b633-a86bacc64ea2"). InnerVolumeSpecName "kube-api-access-qmmvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.937393 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-scripts" (OuterVolumeSpecName: "scripts") pod "d4e8254e-3260-46da-b633-a86bacc64ea2" (UID: "d4e8254e-3260-46da-b633-a86bacc64ea2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.968146 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4e8254e-3260-46da-b633-a86bacc64ea2" (UID: "d4e8254e-3260-46da-b633-a86bacc64ea2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:42 crc kubenswrapper[4767]: I1124 21:56:42.988369 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data" (OuterVolumeSpecName: "config-data") pod "d4e8254e-3260-46da-b633-a86bacc64ea2" (UID: "d4e8254e-3260-46da-b633-a86bacc64ea2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.018511 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.018539 4767 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.018551 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.018559 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmmvt\" (UniqueName: \"kubernetes.io/projected/d4e8254e-3260-46da-b633-a86bacc64ea2-kube-api-access-qmmvt\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.018568 4767 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4e8254e-3260-46da-b633-a86bacc64ea2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.018578 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e8254e-3260-46da-b633-a86bacc64ea2-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.018589 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e8254e-3260-46da-b633-a86bacc64ea2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.182143 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.457836 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.739730 4767 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.739730 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.748150 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d","Type":"ContainerStarted","Data":"98181de5e64e677d4a241f5b9903f93c3c3f57a2dec35428e061aaeab6d7531a"} Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.748188 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d","Type":"ContainerStarted","Data":"b0ef98cf2fc8d882dee8212f09f34de17854cc792cf97b9331c014e97f75ef5a"} Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.771934 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.781889 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.804912 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:56:43 crc kubenswrapper[4767]: E1124 21:56:43.814523 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e8254e-3260-46da-b633-a86bacc64ea2" containerName="cinder-api" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.814577 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e8254e-3260-46da-b633-a86bacc64ea2" containerName="cinder-api" Nov 24 21:56:43 crc kubenswrapper[4767]: E1124 21:56:43.814627 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e8254e-3260-46da-b633-a86bacc64ea2" containerName="cinder-api-log" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.814634 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e8254e-3260-46da-b633-a86bacc64ea2" containerName="cinder-api-log" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.814973 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e8254e-3260-46da-b633-a86bacc64ea2" containerName="cinder-api-log" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.814991 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e8254e-3260-46da-b633-a86bacc64ea2" containerName="cinder-api" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.815958 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.823757 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.823952 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.824079 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.828310 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.956613 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-config-data\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.956673 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-scripts\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.956831 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.957022 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.957119 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-logs\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.957175 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zpjc\" (UniqueName: \"kubernetes.io/projected/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-kube-api-access-6zpjc\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.957242 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-config-data-custom\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.957363 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:43 crc kubenswrapper[4767]: I1124 21:56:43.957483 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.059521 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.059593 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-config-data\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.059642 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-scripts\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.059668 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.059733 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.059774 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-logs\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.059800 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zpjc\" (UniqueName: \"kubernetes.io/projected/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-kube-api-access-6zpjc\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.059836 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-config-data-custom\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.059908 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.059831 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.060494 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-logs\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.064532 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.066148 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-config-data-custom\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.066565 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.066646 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-scripts\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.067242 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-config-data\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.075707 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.080031 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zpjc\" (UniqueName: \"kubernetes.io/projected/a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11-kube-api-access-6zpjc\") pod \"cinder-api-0\" (UID: \"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11\") " pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.106512 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:56:44 crc 
kubenswrapper[4767]: I1124 21:56:44.136527 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.330626 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e8254e-3260-46da-b633-a86bacc64ea2" path="/var/lib/kubelet/pods/d4e8254e-3260-46da-b633-a86bacc64ea2/volumes" Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.591494 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:56:44 crc kubenswrapper[4767]: W1124 21:56:44.595150 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda667ac0d_ac24_4ba9_ac1c_e35e32fa1c11.slice/crio-d3ce005cc3b7fcb41f295aa909caa58b9b5ffe54b1afbc21c75bb6d3d66e8895 WatchSource:0}: Error finding container d3ce005cc3b7fcb41f295aa909caa58b9b5ffe54b1afbc21c75bb6d3d66e8895: Status 404 returned error can't find the container with id d3ce005cc3b7fcb41f295aa909caa58b9b5ffe54b1afbc21c75bb6d3d66e8895 Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.750333 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d","Type":"ContainerStarted","Data":"dbbcad4cccb9f458e64b46aa4d3caad203a310ea83d9d3ac4d06c956d4aeeaab"} Nov 24 21:56:44 crc kubenswrapper[4767]: I1124 21:56:44.751494 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11","Type":"ContainerStarted","Data":"d3ce005cc3b7fcb41f295aa909caa58b9b5ffe54b1afbc21c75bb6d3d66e8895"} Nov 24 21:56:45 crc kubenswrapper[4767]: I1124 21:56:45.496752 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-69dfb67c9d-pwwx6" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": read tcp 10.217.0.2:55096->10.217.0.178:9311: read: connection reset by peer" Nov 24 21:56:45 crc kubenswrapper[4767]: I1124 21:56:45.580594 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-69dfb67c9d-pwwx6" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Nov 24 21:56:45 crc kubenswrapper[4767]: I1124 21:56:45.580634 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-69dfb67c9d-pwwx6" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Nov 24 21:56:45 crc kubenswrapper[4767]: I1124 21:56:45.581782 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:45 crc kubenswrapper[4767]: I1124 21:56:45.777949 4767 generic.go:334] "Generic (PLEG): container finished" podID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerID="da6de2ed159121426438867b09047e89a7cbff50cfd1ce0aeb95313b04dda7e0" exitCode=0
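
Note: the three prober.go:107 failures above trace a normal shutdown rather than a flapping service: first the in-flight healthcheck dies mid-response (EOF), then the listener resets (connection reset by peer), then nothing is bound on 10.217.0.178:9311 at all (connection refused). Failing readiness is what pulls the dying barbican-api pod out of its Service endpoints before it disappears. A sketch of the probe shape these checks imply, in core/v1 Go types (k8s.io/api v0.23+); only the path and port come from the logged URL, the thresholds are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func barbicanStyleReadiness() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthcheck",       // as in "Get http://10.217.0.178:9311/healthcheck"
				Port: intstr.FromInt(9311),
			},
		},
		PeriodSeconds:    5, // illustrative, not Barbican's real cadence
		FailureThreshold: 3, // illustrative
	}
}

func main() {
	p := barbicanStyleReadiness()
	fmt.Println(p.HTTPGet.Path, p.HTTPGet.Port.IntValue())
}

The probeResult="failure" lines are logged at info level (I1124) while the pod is being deleted; in this window they are expected noise, not an error condition.
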
event={"ID":"40072229-e5be-485f-82d8-7e8c17e2c8c3","Type":"ContainerDied","Data":"da6de2ed159121426438867b09047e89a7cbff50cfd1ce0aeb95313b04dda7e0"} Nov 24 21:56:45 crc kubenswrapper[4767]: I1124 21:56:45.779836 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11","Type":"ContainerStarted","Data":"eb7be6e35dcb050e77b816cd9ef89dac9442d3a0fe944e763da1eb5759a751c2"} Nov 24 21:56:45 crc kubenswrapper[4767]: I1124 21:56:45.952176 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:45 crc kubenswrapper[4767]: I1124 21:56:45.975398 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-78c4646f4f-mnjlq" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.034634 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b77df9bd4-5cckf"] Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.035209 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b77df9bd4-5cckf" podUID="72d913d0-e2e2-4c49-9775-e16826ebcb2e" containerName="neutron-api" containerID="cri-o://c44ade794211c80693ef9ffb3fa8abc892c908856529cc35ab5d29e360efefec" gracePeriod=30 Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.035375 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b77df9bd4-5cckf" podUID="72d913d0-e2e2-4c49-9775-e16826ebcb2e" containerName="neutron-httpd" containerID="cri-o://0bbbb7007afab2707a502a42f9b0af7b254e8ef485eeacb17d1f6f3f86a3b416" gracePeriod=30 Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.104039 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data-custom\") pod \"40072229-e5be-485f-82d8-7e8c17e2c8c3\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.104163 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data\") pod \"40072229-e5be-485f-82d8-7e8c17e2c8c3\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.104198 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzv7x\" (UniqueName: \"kubernetes.io/projected/40072229-e5be-485f-82d8-7e8c17e2c8c3-kube-api-access-fzv7x\") pod \"40072229-e5be-485f-82d8-7e8c17e2c8c3\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.104286 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40072229-e5be-485f-82d8-7e8c17e2c8c3-logs\") pod \"40072229-e5be-485f-82d8-7e8c17e2c8c3\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.104326 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-combined-ca-bundle\") pod \"40072229-e5be-485f-82d8-7e8c17e2c8c3\" (UID: \"40072229-e5be-485f-82d8-7e8c17e2c8c3\") " Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.105556 4767 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40072229-e5be-485f-82d8-7e8c17e2c8c3-logs" (OuterVolumeSpecName: "logs") pod "40072229-e5be-485f-82d8-7e8c17e2c8c3" (UID: "40072229-e5be-485f-82d8-7e8c17e2c8c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.113131 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40072229-e5be-485f-82d8-7e8c17e2c8c3-kube-api-access-fzv7x" (OuterVolumeSpecName: "kube-api-access-fzv7x") pod "40072229-e5be-485f-82d8-7e8c17e2c8c3" (UID: "40072229-e5be-485f-82d8-7e8c17e2c8c3"). InnerVolumeSpecName "kube-api-access-fzv7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.115445 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "40072229-e5be-485f-82d8-7e8c17e2c8c3" (UID: "40072229-e5be-485f-82d8-7e8c17e2c8c3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.162476 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40072229-e5be-485f-82d8-7e8c17e2c8c3" (UID: "40072229-e5be-485f-82d8-7e8c17e2c8c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.183458 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data" (OuterVolumeSpecName: "config-data") pod "40072229-e5be-485f-82d8-7e8c17e2c8c3" (UID: "40072229-e5be-485f-82d8-7e8c17e2c8c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.206860 4767 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.206903 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.206942 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzv7x\" (UniqueName: \"kubernetes.io/projected/40072229-e5be-485f-82d8-7e8c17e2c8c3-kube-api-access-fzv7x\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.206958 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40072229-e5be-485f-82d8-7e8c17e2c8c3-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.206969 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40072229-e5be-485f-82d8-7e8c17e2c8c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.578802 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.753673 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.762134 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.788149 4767 generic.go:334] "Generic (PLEG): container finished" podID="72d913d0-e2e2-4c49-9775-e16826ebcb2e" containerID="0bbbb7007afab2707a502a42f9b0af7b254e8ef485eeacb17d1f6f3f86a3b416" exitCode=0 Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.788212 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b77df9bd4-5cckf" event={"ID":"72d913d0-e2e2-4c49-9775-e16826ebcb2e","Type":"ContainerDied","Data":"0bbbb7007afab2707a502a42f9b0af7b254e8ef485eeacb17d1f6f3f86a3b416"} Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.790071 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-69dfb67c9d-pwwx6" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.790060 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69dfb67c9d-pwwx6" event={"ID":"40072229-e5be-485f-82d8-7e8c17e2c8c3","Type":"ContainerDied","Data":"b93cc17fc18f51f3dec8a62cf8a296f2448387656e4028663565846c543744bb"} Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.790229 4767 scope.go:117] "RemoveContainer" containerID="da6de2ed159121426438867b09047e89a7cbff50cfd1ce0aeb95313b04dda7e0" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.793543 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d","Type":"ContainerStarted","Data":"bb90901e2eb65bf6e76732aeceab63546c1b26e3d1bf0ca30a39ea46be16befe"} Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.793688 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.797384 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11","Type":"ContainerStarted","Data":"289c318aeb7a5ee7e2965cf079150160bb570fb4d6ef05eb2a7660dfa54275e4"} Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.820293 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.824542 4767 scope.go:117] "RemoveContainer" containerID="a6b69d1ac3731e59f6c52d083d504bb76c12faaca5751dbde17d0f3d0a2caf1e" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.824985 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.305119703 podStartE2EDuration="6.824944131s" podCreationTimestamp="2025-11-24 21:56:40 +0000 UTC" firstStartedPulling="2025-11-24 21:56:41.975371271 +0000 UTC m=+1084.892354643" lastFinishedPulling="2025-11-24 21:56:46.495195699 +0000 UTC m=+1089.412179071" observedRunningTime="2025-11-24 21:56:46.812781926 +0000 UTC m=+1089.729765308" watchObservedRunningTime="2025-11-24 21:56:46.824944131 +0000 UTC m=+1089.741927503" Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.838493 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-69dfb67c9d-pwwx6"] Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.845928 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-69dfb67c9d-pwwx6"] Nov 24 21:56:46 crc kubenswrapper[4767]: I1124 21:56:46.862361 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.8623400329999997 podStartE2EDuration="3.862340033s" podCreationTimestamp="2025-11-24 21:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:46.852806482 +0000 UTC m=+1089.769789854" watchObservedRunningTime="2025-11-24 21:56:46.862340033 +0000 UTC m=+1089.779323405" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.192764 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.233450 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk"
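
Note: the pod_startup_latency_tracker.go:104 entries above are internally consistent and show how the tracker separates image-pull time from the startup SLO: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the pull window (lastFinishedPulling minus firstStartedPulling). For ceilometer-0:

\begin{aligned}
t_{\mathrm{E2E}}  &= 21{:}56{:}46.824944131 - 21{:}56{:}40 = 6.824944131\ \mathrm{s},\\
t_{\mathrm{pull}} &= 21{:}56{:}46.495195699 - 21{:}56{:}41.975371271 = 4.519824428\ \mathrm{s},\\
t_{\mathrm{SLO}}  &= t_{\mathrm{E2E}} - t_{\mathrm{pull}} = 6.824944131 - 4.519824428 = 2.305119703\ \mathrm{s}.
\end{aligned}

For cinder-api-0 the pull timestamps are the zero value (0001-01-01 00:00:00 +0000 UTC), meaning no image pull occurred, so the SLO and E2E durations coincide at 3.862340033 s.
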
Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.319248 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bzgpq"] Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.319789 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" podUID="fa66113b-5836-4a14-be15-8f2ef6093310" containerName="dnsmasq-dns" containerID="cri-o://efa627a463412baccc8a672fb208753e727137216839867333d72081681dd5b1" gracePeriod=10 Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.436948 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.810748 4767 generic.go:334] "Generic (PLEG): container finished" podID="fa66113b-5836-4a14-be15-8f2ef6093310" containerID="efa627a463412baccc8a672fb208753e727137216839867333d72081681dd5b1" exitCode=0 Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.810807 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" event={"ID":"fa66113b-5836-4a14-be15-8f2ef6093310","Type":"ContainerDied","Data":"efa627a463412baccc8a672fb208753e727137216839867333d72081681dd5b1"} Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.810833 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" event={"ID":"fa66113b-5836-4a14-be15-8f2ef6093310","Type":"ContainerDied","Data":"48994deeb70ef0d37db1da11f835db614712a42e17dbc2690dac22983baaa514"} Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.810844 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48994deeb70ef0d37db1da11f835db614712a42e17dbc2690dac22983baaa514" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.812785 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.824970 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.841732 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-svc\") pod \"fa66113b-5836-4a14-be15-8f2ef6093310\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.841777 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-nb\") pod \"fa66113b-5836-4a14-be15-8f2ef6093310\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.876319 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.925751 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fa66113b-5836-4a14-be15-8f2ef6093310" (UID: "fa66113b-5836-4a14-be15-8f2ef6093310"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.943149 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn25k\" (UniqueName: \"kubernetes.io/projected/fa66113b-5836-4a14-be15-8f2ef6093310-kube-api-access-qn25k\") pod \"fa66113b-5836-4a14-be15-8f2ef6093310\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.943381 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-swift-storage-0\") pod \"fa66113b-5836-4a14-be15-8f2ef6093310\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.943507 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-config\") pod \"fa66113b-5836-4a14-be15-8f2ef6093310\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.943593 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-sb\") pod \"fa66113b-5836-4a14-be15-8f2ef6093310\" (UID: \"fa66113b-5836-4a14-be15-8f2ef6093310\") " Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.944323 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.949582 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa66113b-5836-4a14-be15-8f2ef6093310" (UID: "fa66113b-5836-4a14-be15-8f2ef6093310"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.950778 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa66113b-5836-4a14-be15-8f2ef6093310-kube-api-access-qn25k" (OuterVolumeSpecName: "kube-api-access-qn25k") pod "fa66113b-5836-4a14-be15-8f2ef6093310" (UID: "fa66113b-5836-4a14-be15-8f2ef6093310"). InnerVolumeSpecName "kube-api-access-qn25k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.986898 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-config" (OuterVolumeSpecName: "config") pod "fa66113b-5836-4a14-be15-8f2ef6093310" (UID: "fa66113b-5836-4a14-be15-8f2ef6093310"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.991105 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fa66113b-5836-4a14-be15-8f2ef6093310" (UID: "fa66113b-5836-4a14-be15-8f2ef6093310"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:47 crc kubenswrapper[4767]: I1124 21:56:47.999830 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa66113b-5836-4a14-be15-8f2ef6093310" (UID: "fa66113b-5836-4a14-be15-8f2ef6093310"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:56:48 crc kubenswrapper[4767]: I1124 21:56:48.047568 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn25k\" (UniqueName: \"kubernetes.io/projected/fa66113b-5836-4a14-be15-8f2ef6093310-kube-api-access-qn25k\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:48 crc kubenswrapper[4767]: I1124 21:56:48.047603 4767 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:48 crc kubenswrapper[4767]: I1124 21:56:48.047613 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:48 crc kubenswrapper[4767]: I1124 21:56:48.047622 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:48 crc kubenswrapper[4767]: I1124 21:56:48.047631 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa66113b-5836-4a14-be15-8f2ef6093310-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:48 crc kubenswrapper[4767]: I1124 21:56:48.336899 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" path="/var/lib/kubelet/pods/40072229-e5be-485f-82d8-7e8c17e2c8c3/volumes" Nov 24 21:56:48 crc kubenswrapper[4767]: I1124 21:56:48.820955 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-bzgpq" Nov 24 21:56:48 crc kubenswrapper[4767]: I1124 21:56:48.821082 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="08dd7085-e79f-45d5-88f5-434f4d41552e" containerName="cinder-scheduler" containerID="cri-o://e64f29c9d37a9518e1434010a80ad333e65c0512bb36912cdd66290b10caf390" gracePeriod=30 Nov 24 21:56:48 crc kubenswrapper[4767]: I1124 21:56:48.821157 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="08dd7085-e79f-45d5-88f5-434f4d41552e" containerName="probe" containerID="cri-o://ab9d971750db19c3a8eb23dab6acc7884b7f0acd3755af4495b60efc23ea3f39" gracePeriod=30 Nov 24 21:56:48 crc kubenswrapper[4767]: I1124 21:56:48.843484 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bzgpq"] Nov 24 21:56:48 crc kubenswrapper[4767]: I1124 21:56:48.853425 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-bzgpq"] Nov 24 21:56:49 crc kubenswrapper[4767]: I1124 21:56:49.832488 4767 generic.go:334] "Generic (PLEG): container finished" podID="08dd7085-e79f-45d5-88f5-434f4d41552e" containerID="ab9d971750db19c3a8eb23dab6acc7884b7f0acd3755af4495b60efc23ea3f39" exitCode=0 Nov 24 21:56:49 crc kubenswrapper[4767]: I1124 21:56:49.832524 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08dd7085-e79f-45d5-88f5-434f4d41552e","Type":"ContainerDied","Data":"ab9d971750db19c3a8eb23dab6acc7884b7f0acd3755af4495b60efc23ea3f39"} Nov 24 21:56:50 crc kubenswrapper[4767]: I1124 21:56:50.329178 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa66113b-5836-4a14-be15-8f2ef6093310" path="/var/lib/kubelet/pods/fa66113b-5836-4a14-be15-8f2ef6093310/volumes" Nov 24 21:56:50 crc kubenswrapper[4767]: I1124 21:56:50.854456 4767 generic.go:334] "Generic (PLEG): container finished" podID="72d913d0-e2e2-4c49-9775-e16826ebcb2e" containerID="c44ade794211c80693ef9ffb3fa8abc892c908856529cc35ab5d29e360efefec" exitCode=0 Nov 24 21:56:50 crc kubenswrapper[4767]: I1124 21:56:50.854491 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b77df9bd4-5cckf" event={"ID":"72d913d0-e2e2-4c49-9775-e16826ebcb2e","Type":"ContainerDied","Data":"c44ade794211c80693ef9ffb3fa8abc892c908856529cc35ab5d29e360efefec"} Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.144558 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.307599 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-combined-ca-bundle\") pod \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.307665 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-config\") pod \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.307715 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-ovndb-tls-certs\") pod \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.307745 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzr5f\" (UniqueName: \"kubernetes.io/projected/72d913d0-e2e2-4c49-9775-e16826ebcb2e-kube-api-access-rzr5f\") pod \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.307819 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-httpd-config\") pod \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\" (UID: \"72d913d0-e2e2-4c49-9775-e16826ebcb2e\") " Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.321186 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d913d0-e2e2-4c49-9775-e16826ebcb2e-kube-api-access-rzr5f" (OuterVolumeSpecName: "kube-api-access-rzr5f") pod "72d913d0-e2e2-4c49-9775-e16826ebcb2e" (UID: "72d913d0-e2e2-4c49-9775-e16826ebcb2e"). InnerVolumeSpecName "kube-api-access-rzr5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.327205 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "72d913d0-e2e2-4c49-9775-e16826ebcb2e" (UID: "72d913d0-e2e2-4c49-9775-e16826ebcb2e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.380011 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-config" (OuterVolumeSpecName: "config") pod "72d913d0-e2e2-4c49-9775-e16826ebcb2e" (UID: "72d913d0-e2e2-4c49-9775-e16826ebcb2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.384317 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72d913d0-e2e2-4c49-9775-e16826ebcb2e" (UID: "72d913d0-e2e2-4c49-9775-e16826ebcb2e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.409777 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.409805 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.409817 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzr5f\" (UniqueName: \"kubernetes.io/projected/72d913d0-e2e2-4c49-9775-e16826ebcb2e-kube-api-access-rzr5f\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.409828 4767 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.411160 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "72d913d0-e2e2-4c49-9775-e16826ebcb2e" (UID: "72d913d0-e2e2-4c49-9775-e16826ebcb2e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.511712 4767 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d913d0-e2e2-4c49-9775-e16826ebcb2e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.863701 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b77df9bd4-5cckf" event={"ID":"72d913d0-e2e2-4c49-9775-e16826ebcb2e","Type":"ContainerDied","Data":"802dbcfe8071aebabed0a1fefca9f1393263eebb427ed96c7f1680b8b2c320bc"} Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.863759 4767 scope.go:117] "RemoveContainer" containerID="0bbbb7007afab2707a502a42f9b0af7b254e8ef485eeacb17d1f6f3f86a3b416" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.863790 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b77df9bd4-5cckf" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.889707 4767 scope.go:117] "RemoveContainer" containerID="c44ade794211c80693ef9ffb3fa8abc892c908856529cc35ab5d29e360efefec" Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.895711 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b77df9bd4-5cckf"] Nov 24 21:56:51 crc kubenswrapper[4767]: I1124 21:56:51.903142 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b77df9bd4-5cckf"] Nov 24 21:56:52 crc kubenswrapper[4767]: I1124 21:56:52.325490 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d913d0-e2e2-4c49-9775-e16826ebcb2e" path="/var/lib/kubelet/pods/72d913d0-e2e2-4c49-9775-e16826ebcb2e/volumes" Nov 24 21:56:52 crc kubenswrapper[4767]: I1124 21:56:52.790719 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:56:52 crc kubenswrapper[4767]: I1124 21:56:52.888755 4767 generic.go:334] "Generic (PLEG): container finished" podID="08dd7085-e79f-45d5-88f5-434f4d41552e" containerID="e64f29c9d37a9518e1434010a80ad333e65c0512bb36912cdd66290b10caf390" exitCode=0 Nov 24 21:56:52 crc kubenswrapper[4767]: I1124 21:56:52.888799 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08dd7085-e79f-45d5-88f5-434f4d41552e","Type":"ContainerDied","Data":"e64f29c9d37a9518e1434010a80ad333e65c0512bb36912cdd66290b10caf390"} Nov 24 21:56:52 crc kubenswrapper[4767]: I1124 21:56:52.983452 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.144605 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08dd7085-e79f-45d5-88f5-434f4d41552e-etc-machine-id\") pod \"08dd7085-e79f-45d5-88f5-434f4d41552e\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.144690 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data\") pod \"08dd7085-e79f-45d5-88f5-434f4d41552e\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.144719 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08dd7085-e79f-45d5-88f5-434f4d41552e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "08dd7085-e79f-45d5-88f5-434f4d41552e" (UID: "08dd7085-e79f-45d5-88f5-434f4d41552e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.144842 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-combined-ca-bundle\") pod \"08dd7085-e79f-45d5-88f5-434f4d41552e\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.144868 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-scripts\") pod \"08dd7085-e79f-45d5-88f5-434f4d41552e\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.144897 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdrbn\" (UniqueName: \"kubernetes.io/projected/08dd7085-e79f-45d5-88f5-434f4d41552e-kube-api-access-jdrbn\") pod \"08dd7085-e79f-45d5-88f5-434f4d41552e\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.144963 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data-custom\") pod \"08dd7085-e79f-45d5-88f5-434f4d41552e\" (UID: \"08dd7085-e79f-45d5-88f5-434f4d41552e\") " Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.145446 4767 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08dd7085-e79f-45d5-88f5-434f4d41552e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.150135 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-scripts" (OuterVolumeSpecName: "scripts") pod "08dd7085-e79f-45d5-88f5-434f4d41552e" (UID: "08dd7085-e79f-45d5-88f5-434f4d41552e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.157464 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08dd7085-e79f-45d5-88f5-434f4d41552e-kube-api-access-jdrbn" (OuterVolumeSpecName: "kube-api-access-jdrbn") pod "08dd7085-e79f-45d5-88f5-434f4d41552e" (UID: "08dd7085-e79f-45d5-88f5-434f4d41552e"). InnerVolumeSpecName "kube-api-access-jdrbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.166792 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08dd7085-e79f-45d5-88f5-434f4d41552e" (UID: "08dd7085-e79f-45d5-88f5-434f4d41552e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.224073 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08dd7085-e79f-45d5-88f5-434f4d41552e" (UID: "08dd7085-e79f-45d5-88f5-434f4d41552e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.247670 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.247696 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.247705 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdrbn\" (UniqueName: \"kubernetes.io/projected/08dd7085-e79f-45d5-88f5-434f4d41552e-kube-api-access-jdrbn\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.247715 4767 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.275423 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data" (OuterVolumeSpecName: "config-data") pod "08dd7085-e79f-45d5-88f5-434f4d41552e" (UID: "08dd7085-e79f-45d5-88f5-434f4d41552e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.349951 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dd7085-e79f-45d5-88f5-434f4d41552e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.906209 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08dd7085-e79f-45d5-88f5-434f4d41552e","Type":"ContainerDied","Data":"022185f0e3f081cfc064185b0e0877b5b403898659a0d37e85e9351062c1493b"} Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.906261 4767 scope.go:117] "RemoveContainer" containerID="ab9d971750db19c3a8eb23dab6acc7884b7f0acd3755af4495b60efc23ea3f39" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.906353 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.953126 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.958131 4767 scope.go:117] "RemoveContainer" containerID="e64f29c9d37a9518e1434010a80ad333e65c0512bb36912cdd66290b10caf390" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.969206 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.989372 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:56:53 crc kubenswrapper[4767]: E1124 21:56:53.989790 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d913d0-e2e2-4c49-9775-e16826ebcb2e" containerName="neutron-api" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.989807 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d913d0-e2e2-4c49-9775-e16826ebcb2e" containerName="neutron-api" Nov 24 21:56:53 crc kubenswrapper[4767]: E1124 21:56:53.989820 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.989826 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api" Nov 24 21:56:53 crc kubenswrapper[4767]: E1124 21:56:53.989856 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api-log" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.989862 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api-log" Nov 24 21:56:53 crc kubenswrapper[4767]: E1124 21:56:53.989871 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dd7085-e79f-45d5-88f5-434f4d41552e" containerName="cinder-scheduler" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.989876 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dd7085-e79f-45d5-88f5-434f4d41552e" containerName="cinder-scheduler" Nov 24 21:56:53 crc kubenswrapper[4767]: E1124 21:56:53.989892 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa66113b-5836-4a14-be15-8f2ef6093310" containerName="dnsmasq-dns" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.989898 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa66113b-5836-4a14-be15-8f2ef6093310" containerName="dnsmasq-dns" Nov 24 21:56:53 crc kubenswrapper[4767]: E1124 21:56:53.989910 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d913d0-e2e2-4c49-9775-e16826ebcb2e" containerName="neutron-httpd" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.989916 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d913d0-e2e2-4c49-9775-e16826ebcb2e" containerName="neutron-httpd" Nov 24 21:56:53 crc kubenswrapper[4767]: E1124 21:56:53.989926 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dd7085-e79f-45d5-88f5-434f4d41552e" containerName="probe" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.989932 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dd7085-e79f-45d5-88f5-434f4d41552e" containerName="probe" Nov 24 21:56:53 crc kubenswrapper[4767]: E1124 21:56:53.989947 4767 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fa66113b-5836-4a14-be15-8f2ef6093310" containerName="init" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.989953 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa66113b-5836-4a14-be15-8f2ef6093310" containerName="init" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.990112 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa66113b-5836-4a14-be15-8f2ef6093310" containerName="dnsmasq-dns" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.990124 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="08dd7085-e79f-45d5-88f5-434f4d41552e" containerName="probe" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.990133 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="08dd7085-e79f-45d5-88f5-434f4d41552e" containerName="cinder-scheduler" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.990151 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d913d0-e2e2-4c49-9775-e16826ebcb2e" containerName="neutron-httpd" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.990160 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.990177 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d913d0-e2e2-4c49-9775-e16826ebcb2e" containerName="neutron-api" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.990189 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="40072229-e5be-485f-82d8-7e8c17e2c8c3" containerName="barbican-api-log" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.991359 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:56:53 crc kubenswrapper[4767]: I1124 21:56:53.994199 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.019657 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.166398 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.166832 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-config-data\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.166953 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c71dd846-b62a-4f53-aa40-7c55462b2a15-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.167059 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlbbr\" (UniqueName: 
\"kubernetes.io/projected/c71dd846-b62a-4f53-aa40-7c55462b2a15-kube-api-access-xlbbr\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.167222 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.167357 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-scripts\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.268698 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlbbr\" (UniqueName: \"kubernetes.io/projected/c71dd846-b62a-4f53-aa40-7c55462b2a15-kube-api-access-xlbbr\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.269027 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.269167 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-scripts\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.269422 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.269551 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-config-data\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.269694 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c71dd846-b62a-4f53-aa40-7c55462b2a15-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.269923 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c71dd846-b62a-4f53-aa40-7c55462b2a15-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 
crc kubenswrapper[4767]: I1124 21:56:54.275865 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-scripts\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.276253 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.279105 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-config-data\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.287117 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71dd846-b62a-4f53-aa40-7c55462b2a15-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.291655 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlbbr\" (UniqueName: \"kubernetes.io/projected/c71dd846-b62a-4f53-aa40-7c55462b2a15-kube-api-access-xlbbr\") pod \"cinder-scheduler-0\" (UID: \"c71dd846-b62a-4f53-aa40-7c55462b2a15\") " pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.312968 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.325575 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08dd7085-e79f-45d5-88f5-434f4d41552e" path="/var/lib/kubelet/pods/08dd7085-e79f-45d5-88f5-434f4d41552e/volumes" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.403297 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-567c96d68-4rmbm" Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.508484 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d69c9d5c6-qr8nq"] Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.514459 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d69c9d5c6-qr8nq" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon-log" containerID="cri-o://f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad" gracePeriod=30 Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.514564 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d69c9d5c6-qr8nq" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon" containerID="cri-o://03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02" gracePeriod=30 Nov 24 21:56:54 crc kubenswrapper[4767]: W1124 21:56:54.838832 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc71dd846_b62a_4f53_aa40_7c55462b2a15.slice/crio-ff912ec177bb6b42750ed6707e69457460026390b948a23255ebb0762d4d106e WatchSource:0}: Error finding container ff912ec177bb6b42750ed6707e69457460026390b948a23255ebb0762d4d106e: Status 404 returned error can't find the container with id ff912ec177bb6b42750ed6707e69457460026390b948a23255ebb0762d4d106e Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.846497 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:56:54 crc kubenswrapper[4767]: I1124 21:56:54.935746 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c71dd846-b62a-4f53-aa40-7c55462b2a15","Type":"ContainerStarted","Data":"ff912ec177bb6b42750ed6707e69457460026390b948a23255ebb0762d4d106e"} Nov 24 21:56:55 crc kubenswrapper[4767]: I1124 21:56:55.947863 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c71dd846-b62a-4f53-aa40-7c55462b2a15","Type":"ContainerStarted","Data":"0b70563141e313e647702d07c672c09d9e532bb467f24ec926e7a0134e599365"} Nov 24 21:56:56 crc kubenswrapper[4767]: I1124 21:56:56.048409 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:56 crc kubenswrapper[4767]: I1124 21:56:56.048455 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d6f9dff64-d2zkv" Nov 24 21:56:56 crc kubenswrapper[4767]: I1124 21:56:56.332790 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7574cdc49f-grwcx" Nov 24 21:56:56 crc kubenswrapper[4767]: I1124 21:56:56.341168 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 21:56:56 crc kubenswrapper[4767]: I1124 21:56:56.962506 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"c71dd846-b62a-4f53-aa40-7c55462b2a15","Type":"ContainerStarted","Data":"0ed19270b8de02fe6f479c375aff687d52b8a29ea8c6ad24909a298be24b68df"} Nov 24 21:56:56 crc kubenswrapper[4767]: I1124 21:56:56.984835 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.98480909 podStartE2EDuration="3.98480909s" podCreationTimestamp="2025-11-24 21:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:56:56.980297282 +0000 UTC m=+1099.897280684" watchObservedRunningTime="2025-11-24 21:56:56.98480909 +0000 UTC m=+1099.901792482" Nov 24 21:56:57 crc kubenswrapper[4767]: I1124 21:56:57.971981 4767 generic.go:334] "Generic (PLEG): container finished" podID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerID="03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02" exitCode=0 Nov 24 21:56:57 crc kubenswrapper[4767]: I1124 21:56:57.972171 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d69c9d5c6-qr8nq" event={"ID":"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1","Type":"ContainerDied","Data":"03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02"} Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.245624 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.247193 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.249907 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.250181 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-gdz25" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.250325 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.255886 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.313610 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.368938 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.369014 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-openstack-config-secret\") pod \"openstackclient\" (UID: \"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.369128 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-openstack-config\") pod \"openstackclient\" (UID: 
\"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.369150 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9x8\" (UniqueName: \"kubernetes.io/projected/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-kube-api-access-hb9x8\") pod \"openstackclient\" (UID: \"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.470643 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.470979 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-openstack-config-secret\") pod \"openstackclient\" (UID: \"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.471129 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-openstack-config\") pod \"openstackclient\" (UID: \"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.471167 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb9x8\" (UniqueName: \"kubernetes.io/projected/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-kube-api-access-hb9x8\") pod \"openstackclient\" (UID: \"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.472191 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-openstack-config\") pod \"openstackclient\" (UID: \"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.484109 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-openstack-config-secret\") pod \"openstackclient\" (UID: \"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.484188 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.495650 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb9x8\" (UniqueName: \"kubernetes.io/projected/9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c-kube-api-access-hb9x8\") pod \"openstackclient\" (UID: \"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c\") " pod="openstack/openstackclient" Nov 24 21:56:59 crc kubenswrapper[4767]: I1124 21:56:59.568653 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 21:57:00 crc kubenswrapper[4767]: I1124 21:57:00.034925 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 21:57:00 crc kubenswrapper[4767]: W1124 21:57:00.040138 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d2b6ae4_687d_4fa8_b641_0ddbbf3df57c.slice/crio-c50b5fa9c3268c536ccaf54f8f7e5f7c29b9e60c45798083d7feb35ff3a5952b WatchSource:0}: Error finding container c50b5fa9c3268c536ccaf54f8f7e5f7c29b9e60c45798083d7feb35ff3a5952b: Status 404 returned error can't find the container with id c50b5fa9c3268c536ccaf54f8f7e5f7c29b9e60c45798083d7feb35ff3a5952b Nov 24 21:57:00 crc kubenswrapper[4767]: I1124 21:57:00.802060 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d69c9d5c6-qr8nq" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Nov 24 21:57:00 crc kubenswrapper[4767]: I1124 21:57:00.998179 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c","Type":"ContainerStarted","Data":"c50b5fa9c3268c536ccaf54f8f7e5f7c29b9e60c45798083d7feb35ff3a5952b"} Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.233690 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.234340 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="sg-core" containerID="cri-o://dbbcad4cccb9f458e64b46aa4d3caad203a310ea83d9d3ac4d06c956d4aeeaab" gracePeriod=30 Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.234496 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="ceilometer-notification-agent" containerID="cri-o://98181de5e64e677d4a241f5b9903f93c3c3f57a2dec35428e061aaeab6d7531a" gracePeriod=30 Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.234248 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="ceilometer-central-agent" containerID="cri-o://b0ef98cf2fc8d882dee8212f09f34de17854cc792cf97b9331c014e97f75ef5a" gracePeriod=30 Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.238672 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="proxy-httpd" containerID="cri-o://bb90901e2eb65bf6e76732aeceab63546c1b26e3d1bf0ca30a39ea46be16befe" gracePeriod=30 Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.248611 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.563712 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.585362 4767 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/swift-proxy-64b748f489-f8d4f"] Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.587659 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.589683 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.591563 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.591928 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.617489 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64b748f489-f8d4f"] Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.664802 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-combined-ca-bundle\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.664965 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92516271-3ccd-4f57-866d-7242ab4b50c6-run-httpd\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.665012 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92516271-3ccd-4f57-866d-7242ab4b50c6-etc-swift\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.665100 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92516271-3ccd-4f57-866d-7242ab4b50c6-log-httpd\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.667815 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-config-data\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.667880 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9zwp\" (UniqueName: \"kubernetes.io/projected/92516271-3ccd-4f57-866d-7242ab4b50c6-kube-api-access-k9zwp\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.668676 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-internal-tls-certs\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.668981 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-public-tls-certs\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.770712 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-internal-tls-certs\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.770776 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-public-tls-certs\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.770802 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-combined-ca-bundle\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.770850 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92516271-3ccd-4f57-866d-7242ab4b50c6-run-httpd\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.770868 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92516271-3ccd-4f57-866d-7242ab4b50c6-etc-swift\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.770886 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92516271-3ccd-4f57-866d-7242ab4b50c6-log-httpd\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.770928 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-config-data\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.770950 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9zwp\" (UniqueName: 
\"kubernetes.io/projected/92516271-3ccd-4f57-866d-7242ab4b50c6-kube-api-access-k9zwp\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.771538 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92516271-3ccd-4f57-866d-7242ab4b50c6-log-httpd\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.772478 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92516271-3ccd-4f57-866d-7242ab4b50c6-run-httpd\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.779923 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-internal-tls-certs\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.780033 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-config-data\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.781137 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92516271-3ccd-4f57-866d-7242ab4b50c6-etc-swift\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.781294 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-combined-ca-bundle\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.781505 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92516271-3ccd-4f57-866d-7242ab4b50c6-public-tls-certs\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.787912 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9zwp\" (UniqueName: \"kubernetes.io/projected/92516271-3ccd-4f57-866d-7242ab4b50c6-kube-api-access-k9zwp\") pod \"swift-proxy-64b748f489-f8d4f\" (UID: \"92516271-3ccd-4f57-866d-7242ab4b50c6\") " pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:04 crc kubenswrapper[4767]: I1124 21:57:04.912728 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:05 crc kubenswrapper[4767]: I1124 21:57:05.043434 4767 generic.go:334] "Generic (PLEG): container finished" podID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerID="bb90901e2eb65bf6e76732aeceab63546c1b26e3d1bf0ca30a39ea46be16befe" exitCode=0 Nov 24 21:57:05 crc kubenswrapper[4767]: I1124 21:57:05.043469 4767 generic.go:334] "Generic (PLEG): container finished" podID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerID="dbbcad4cccb9f458e64b46aa4d3caad203a310ea83d9d3ac4d06c956d4aeeaab" exitCode=2 Nov 24 21:57:05 crc kubenswrapper[4767]: I1124 21:57:05.043479 4767 generic.go:334] "Generic (PLEG): container finished" podID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerID="b0ef98cf2fc8d882dee8212f09f34de17854cc792cf97b9331c014e97f75ef5a" exitCode=0 Nov 24 21:57:05 crc kubenswrapper[4767]: I1124 21:57:05.043496 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d","Type":"ContainerDied","Data":"bb90901e2eb65bf6e76732aeceab63546c1b26e3d1bf0ca30a39ea46be16befe"} Nov 24 21:57:05 crc kubenswrapper[4767]: I1124 21:57:05.043520 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d","Type":"ContainerDied","Data":"dbbcad4cccb9f458e64b46aa4d3caad203a310ea83d9d3ac4d06c956d4aeeaab"} Nov 24 21:57:05 crc kubenswrapper[4767]: I1124 21:57:05.043531 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d","Type":"ContainerDied","Data":"b0ef98cf2fc8d882dee8212f09f34de17854cc792cf97b9331c014e97f75ef5a"} Nov 24 21:57:05 crc kubenswrapper[4767]: I1124 21:57:05.481915 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:57:05 crc kubenswrapper[4767]: I1124 21:57:05.482323 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:57:09 crc kubenswrapper[4767]: I1124 21:57:09.350614 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:57:09 crc kubenswrapper[4767]: I1124 21:57:09.351193 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="31f5a67d-feb4-402c-ac35-fc17aca926c5" containerName="watcher-decision-engine" containerID="cri-o://105ba1b3202a9e826b4462af96a973b2dc271b8f111e949256b3083a867bab1d" gracePeriod=30 Nov 24 21:57:10 crc kubenswrapper[4767]: I1124 21:57:10.076814 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:57:10 crc kubenswrapper[4767]: I1124 21:57:10.077060 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerName="glance-log" containerID="cri-o://f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497" gracePeriod=30 Nov 24 21:57:10 crc 
kubenswrapper[4767]: I1124 21:57:10.077341 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerName="glance-httpd" containerID="cri-o://257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217" gracePeriod=30 Nov 24 21:57:10 crc kubenswrapper[4767]: I1124 21:57:10.104908 4767 generic.go:334] "Generic (PLEG): container finished" podID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerID="98181de5e64e677d4a241f5b9903f93c3c3f57a2dec35428e061aaeab6d7531a" exitCode=0 Nov 24 21:57:10 crc kubenswrapper[4767]: I1124 21:57:10.104988 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d","Type":"ContainerDied","Data":"98181de5e64e677d4a241f5b9903f93c3c3f57a2dec35428e061aaeab6d7531a"} Nov 24 21:57:10 crc kubenswrapper[4767]: I1124 21:57:10.802643 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d69c9d5c6-qr8nq" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.124730 4767 generic.go:334] "Generic (PLEG): container finished" podID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerID="f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497" exitCode=143 Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.124828 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f680b41-c2c3-4795-98df-05e64ad8ed95","Type":"ContainerDied","Data":"f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497"} Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.130494 4767 generic.go:334] "Generic (PLEG): container finished" podID="31f5a67d-feb4-402c-ac35-fc17aca926c5" containerID="105ba1b3202a9e826b4462af96a973b2dc271b8f111e949256b3083a867bab1d" exitCode=0 Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.130551 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"31f5a67d-feb4-402c-ac35-fc17aca926c5","Type":"ContainerDied","Data":"105ba1b3202a9e826b4462af96a973b2dc271b8f111e949256b3083a867bab1d"} Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.202160 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.184:3000/\": dial tcp 10.217.0.184:3000: connect: connection refused" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.557324 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.610103 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.704644 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-log-httpd\") pod \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.704696 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbwfq\" (UniqueName: \"kubernetes.io/projected/31f5a67d-feb4-402c-ac35-fc17aca926c5-kube-api-access-qbwfq\") pod \"31f5a67d-feb4-402c-ac35-fc17aca926c5\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.704789 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-combined-ca-bundle\") pod \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.704815 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-sg-core-conf-yaml\") pod \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.704844 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f5a67d-feb4-402c-ac35-fc17aca926c5-logs\") pod \"31f5a67d-feb4-402c-ac35-fc17aca926c5\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.704871 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-config-data\") pod \"31f5a67d-feb4-402c-ac35-fc17aca926c5\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.704885 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-scripts\") pod \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.704906 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-677sg\" (UniqueName: \"kubernetes.io/projected/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-kube-api-access-677sg\") pod \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.704920 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-combined-ca-bundle\") pod \"31f5a67d-feb4-402c-ac35-fc17aca926c5\" (UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.704948 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-custom-prometheus-ca\") pod \"31f5a67d-feb4-402c-ac35-fc17aca926c5\" 
(UID: \"31f5a67d-feb4-402c-ac35-fc17aca926c5\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.705004 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-run-httpd\") pod \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.705024 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-config-data\") pod \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\" (UID: \"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d\") " Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.705667 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31f5a67d-feb4-402c-ac35-fc17aca926c5-logs" (OuterVolumeSpecName: "logs") pod "31f5a67d-feb4-402c-ac35-fc17aca926c5" (UID: "31f5a67d-feb4-402c-ac35-fc17aca926c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.705740 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" (UID: "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.706033 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" (UID: "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.710573 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-kube-api-access-677sg" (OuterVolumeSpecName: "kube-api-access-677sg") pod "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" (UID: "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d"). InnerVolumeSpecName "kube-api-access-677sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.712537 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-scripts" (OuterVolumeSpecName: "scripts") pod "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" (UID: "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.713359 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f5a67d-feb4-402c-ac35-fc17aca926c5-kube-api-access-qbwfq" (OuterVolumeSpecName: "kube-api-access-qbwfq") pod "31f5a67d-feb4-402c-ac35-fc17aca926c5" (UID: "31f5a67d-feb4-402c-ac35-fc17aca926c5"). InnerVolumeSpecName "kube-api-access-qbwfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.734814 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" (UID: "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.753535 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31f5a67d-feb4-402c-ac35-fc17aca926c5" (UID: "31f5a67d-feb4-402c-ac35-fc17aca926c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.770211 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-config-data" (OuterVolumeSpecName: "config-data") pod "31f5a67d-feb4-402c-ac35-fc17aca926c5" (UID: "31f5a67d-feb4-402c-ac35-fc17aca926c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.774725 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "31f5a67d-feb4-402c-ac35-fc17aca926c5" (UID: "31f5a67d-feb4-402c-ac35-fc17aca926c5"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.808474 4767 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.808503 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f5a67d-feb4-402c-ac35-fc17aca926c5-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.808514 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.808522 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.808531 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-677sg\" (UniqueName: \"kubernetes.io/projected/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-kube-api-access-677sg\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.808542 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.808550 4767 reconciler_common.go:293] "Volume detached for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/31f5a67d-feb4-402c-ac35-fc17aca926c5-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.808557 4767 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.808564 4767 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.808572 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbwfq\" (UniqueName: \"kubernetes.io/projected/31f5a67d-feb4-402c-ac35-fc17aca926c5-kube-api-access-qbwfq\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.814128 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" (UID: "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.838434 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-config-data" (OuterVolumeSpecName: "config-data") pod "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" (UID: "db1d7289-88a9-4dc9-a2de-3adaac6d3c9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:11 crc kubenswrapper[4767]: W1124 21:57:11.885745 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92516271_3ccd_4f57_866d_7242ab4b50c6.slice/crio-d7958a982d904cea65f71334ae2af20f43bae5997b641dab8bebfc8cb0656489 WatchSource:0}: Error finding container d7958a982d904cea65f71334ae2af20f43bae5997b641dab8bebfc8cb0656489: Status 404 returned error can't find the container with id d7958a982d904cea65f71334ae2af20f43bae5997b641dab8bebfc8cb0656489 Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.887752 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64b748f489-f8d4f"] Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.910441 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:11 crc kubenswrapper[4767]: I1124 21:57:11.910634 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.146952 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c","Type":"ContainerStarted","Data":"d4872edcf5fe6edd48af1b604b1f78dcd6135903800e1d6b656dcdba2ba72830"} Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.148811 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64b748f489-f8d4f" 
event={"ID":"92516271-3ccd-4f57-866d-7242ab4b50c6","Type":"ContainerStarted","Data":"3ccf10cd7aa8952c623873f435047b875fdc83f2d58c05ead6237441d84618f7"} Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.148873 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64b748f489-f8d4f" event={"ID":"92516271-3ccd-4f57-866d-7242ab4b50c6","Type":"ContainerStarted","Data":"d7958a982d904cea65f71334ae2af20f43bae5997b641dab8bebfc8cb0656489"} Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.154024 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db1d7289-88a9-4dc9-a2de-3adaac6d3c9d","Type":"ContainerDied","Data":"fd5d73a94b6669d533e390bc6f9f47c73429d64a2d98fcc13a5011a2a5133848"} Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.154084 4767 scope.go:117] "RemoveContainer" containerID="bb90901e2eb65bf6e76732aeceab63546c1b26e3d1bf0ca30a39ea46be16befe" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.154304 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.160636 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"31f5a67d-feb4-402c-ac35-fc17aca926c5","Type":"ContainerDied","Data":"aa90c75b4150a78587a2d5f8e544b9849198f6ef3bbd1956035bd561f423a398"} Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.160714 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.275413 4767 scope.go:117] "RemoveContainer" containerID="dbbcad4cccb9f458e64b46aa4d3caad203a310ea83d9d3ac4d06c956d4aeeaab" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.297411 4767 scope.go:117] "RemoveContainer" containerID="98181de5e64e677d4a241f5b9903f93c3c3f57a2dec35428e061aaeab6d7531a" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.304353 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.03918302 podStartE2EDuration="13.304331478s" podCreationTimestamp="2025-11-24 21:56:59 +0000 UTC" firstStartedPulling="2025-11-24 21:57:00.042657923 +0000 UTC m=+1102.959641295" lastFinishedPulling="2025-11-24 21:57:11.307806381 +0000 UTC m=+1114.224789753" observedRunningTime="2025-11-24 21:57:12.174310015 +0000 UTC m=+1115.091293387" watchObservedRunningTime="2025-11-24 21:57:12.304331478 +0000 UTC m=+1115.221314850" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.304922 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.339040 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.356770 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.362746 4767 scope.go:117] "RemoveContainer" containerID="b0ef98cf2fc8d882dee8212f09f34de17854cc792cf97b9331c014e97f75ef5a" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.370312 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.387321 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:57:12 
crc kubenswrapper[4767]: E1124 21:57:12.387713 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f5a67d-feb4-402c-ac35-fc17aca926c5" containerName="watcher-decision-engine" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.387732 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f5a67d-feb4-402c-ac35-fc17aca926c5" containerName="watcher-decision-engine" Nov 24 21:57:12 crc kubenswrapper[4767]: E1124 21:57:12.387752 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="ceilometer-notification-agent" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.387759 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="ceilometer-notification-agent" Nov 24 21:57:12 crc kubenswrapper[4767]: E1124 21:57:12.387776 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="proxy-httpd" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.387782 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="proxy-httpd" Nov 24 21:57:12 crc kubenswrapper[4767]: E1124 21:57:12.387798 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="sg-core" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.387804 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="sg-core" Nov 24 21:57:12 crc kubenswrapper[4767]: E1124 21:57:12.387811 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="ceilometer-central-agent" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.387816 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="ceilometer-central-agent" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.387981 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="ceilometer-notification-agent" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.387998 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="ceilometer-central-agent" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.388009 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="sg-core" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.388019 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" containerName="proxy-httpd" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.388028 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f5a67d-feb4-402c-ac35-fc17aca926c5" containerName="watcher-decision-engine" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.388651 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.391983 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.392198 4767 scope.go:117] "RemoveContainer" containerID="105ba1b3202a9e826b4462af96a973b2dc271b8f111e949256b3083a867bab1d" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.394191 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.402499 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.404682 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.407965 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.411923 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.412370 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521389 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96538813-044f-45a6-b596-07f9dec093c6-config-data\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521441 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521476 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/96538813-044f-45a6-b596-07f9dec093c6-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521536 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c75pr\" (UniqueName: \"kubernetes.io/projected/c3731da6-5a54-4794-a84b-a8269acaabc5-kube-api-access-c75pr\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521589 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96538813-044f-45a6-b596-07f9dec093c6-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521619 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521658 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-log-httpd\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521692 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96538813-044f-45a6-b596-07f9dec093c6-logs\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521736 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-scripts\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521828 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-run-httpd\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521915 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-config-data\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.521976 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz9hj\" (UniqueName: \"kubernetes.io/projected/96538813-044f-45a6-b596-07f9dec093c6-kube-api-access-tz9hj\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.550229 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:12 crc kubenswrapper[4767]: E1124 21:57:12.550907 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-c75pr log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="c3731da6-5a54-4794-a84b-a8269acaabc5" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624281 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/96538813-044f-45a6-b596-07f9dec093c6-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624355 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c75pr\" 
(UniqueName: \"kubernetes.io/projected/c3731da6-5a54-4794-a84b-a8269acaabc5-kube-api-access-c75pr\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624406 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96538813-044f-45a6-b596-07f9dec093c6-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624432 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624464 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-log-httpd\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624506 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96538813-044f-45a6-b596-07f9dec093c6-logs\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624545 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-scripts\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624587 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-run-httpd\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624630 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-config-data\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624663 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz9hj\" (UniqueName: \"kubernetes.io/projected/96538813-044f-45a6-b596-07f9dec093c6-kube-api-access-tz9hj\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624712 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96538813-044f-45a6-b596-07f9dec093c6-config-data\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.624736 4767 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.625698 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-run-httpd\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.625981 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-log-httpd\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.627607 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96538813-044f-45a6-b596-07f9dec093c6-logs\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.631901 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/96538813-044f-45a6-b596-07f9dec093c6-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.631970 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.632143 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-scripts\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.632420 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96538813-044f-45a6-b596-07f9dec093c6-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.634222 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96538813-044f-45a6-b596-07f9dec093c6-config-data\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.634507 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 
21:57:12.643816 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-config-data\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.647152 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c75pr\" (UniqueName: \"kubernetes.io/projected/c3731da6-5a54-4794-a84b-a8269acaabc5-kube-api-access-c75pr\") pod \"ceilometer-0\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " pod="openstack/ceilometer-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.650014 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz9hj\" (UniqueName: \"kubernetes.io/projected/96538813-044f-45a6-b596-07f9dec093c6-kube-api-access-tz9hj\") pod \"watcher-decision-engine-0\" (UID: \"96538813-044f-45a6-b596-07f9dec093c6\") " pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.709125 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.901619 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.904029 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1fa93f02-121b-49f9-a08b-e04f44a142f8" containerName="glance-log" containerID="cri-o://ee8d79c430c570e23cc92c31a317c8619eb8070684ee32ea9790451e8ccd57b9" gracePeriod=30 Nov 24 21:57:12 crc kubenswrapper[4767]: I1124 21:57:12.904744 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1fa93f02-121b-49f9-a08b-e04f44a142f8" containerName="glance-httpd" containerID="cri-o://5bc1de983909cf8b558f2c3434823057f0116823799e07bcb1042fc8ecec3d57" gracePeriod=30 Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.211432 4767 generic.go:334] "Generic (PLEG): container finished" podID="1fa93f02-121b-49f9-a08b-e04f44a142f8" containerID="ee8d79c430c570e23cc92c31a317c8619eb8070684ee32ea9790451e8ccd57b9" exitCode=143 Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.211512 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fa93f02-121b-49f9-a08b-e04f44a142f8","Type":"ContainerDied","Data":"ee8d79c430c570e23cc92c31a317c8619eb8070684ee32ea9790451e8ccd57b9"} Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.216879 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64b748f489-f8d4f" event={"ID":"92516271-3ccd-4f57-866d-7242ab4b50c6","Type":"ContainerStarted","Data":"a484e58a4311e964baa4e786dfd5d93e79d786f68d7a3cdac3d4bafec7afb6de"} Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.216928 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.219307 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.264783 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.267296 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-64b748f489-f8d4f" podStartSLOduration=9.267249124 podStartE2EDuration="9.267249124s" podCreationTimestamp="2025-11-24 21:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:57:13.266518603 +0000 UTC m=+1116.183501975" watchObservedRunningTime="2025-11-24 21:57:13.267249124 +0000 UTC m=+1116.184232496" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.272416 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.339611 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-sg-core-conf-yaml\") pod \"c3731da6-5a54-4794-a84b-a8269acaabc5\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.341884 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-log-httpd\") pod \"c3731da6-5a54-4794-a84b-a8269acaabc5\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.342137 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-run-httpd\") pod \"c3731da6-5a54-4794-a84b-a8269acaabc5\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.342258 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-combined-ca-bundle\") pod \"c3731da6-5a54-4794-a84b-a8269acaabc5\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.342454 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-config-data\") pod \"c3731da6-5a54-4794-a84b-a8269acaabc5\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.342652 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c75pr\" (UniqueName: \"kubernetes.io/projected/c3731da6-5a54-4794-a84b-a8269acaabc5-kube-api-access-c75pr\") pod \"c3731da6-5a54-4794-a84b-a8269acaabc5\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.342729 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c3731da6-5a54-4794-a84b-a8269acaabc5" (UID: 
"c3731da6-5a54-4794-a84b-a8269acaabc5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.342756 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-scripts\") pod \"c3731da6-5a54-4794-a84b-a8269acaabc5\" (UID: \"c3731da6-5a54-4794-a84b-a8269acaabc5\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.343803 4767 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.344509 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c3731da6-5a54-4794-a84b-a8269acaabc5" (UID: "c3731da6-5a54-4794-a84b-a8269acaabc5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.355512 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-scripts" (OuterVolumeSpecName: "scripts") pod "c3731da6-5a54-4794-a84b-a8269acaabc5" (UID: "c3731da6-5a54-4794-a84b-a8269acaabc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.374478 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c3731da6-5a54-4794-a84b-a8269acaabc5" (UID: "c3731da6-5a54-4794-a84b-a8269acaabc5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.377089 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3731da6-5a54-4794-a84b-a8269acaabc5" (UID: "c3731da6-5a54-4794-a84b-a8269acaabc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.389379 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-config-data" (OuterVolumeSpecName: "config-data") pod "c3731da6-5a54-4794-a84b-a8269acaabc5" (UID: "c3731da6-5a54-4794-a84b-a8269acaabc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.391296 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3731da6-5a54-4794-a84b-a8269acaabc5-kube-api-access-c75pr" (OuterVolumeSpecName: "kube-api-access-c75pr") pod "c3731da6-5a54-4794-a84b-a8269acaabc5" (UID: "c3731da6-5a54-4794-a84b-a8269acaabc5"). InnerVolumeSpecName "kube-api-access-c75pr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.414302 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.165:9292/healthcheck\": read tcp 10.217.0.2:51400->10.217.0.165:9292: read: connection reset by peer" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.414606 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.165:9292/healthcheck\": read tcp 10.217.0.2:51398->10.217.0.165:9292: read: connection reset by peer" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.448526 4767 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3731da6-5a54-4794-a84b-a8269acaabc5-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.448570 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.448585 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.448597 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c75pr\" (UniqueName: \"kubernetes.io/projected/c3731da6-5a54-4794-a84b-a8269acaabc5-kube-api-access-c75pr\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.448609 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.448619 4767 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3731da6-5a54-4794-a84b-a8269acaabc5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.905376 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.958876 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-logs\") pod \"8f680b41-c2c3-4795-98df-05e64ad8ed95\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.958946 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-scripts\") pod \"8f680b41-c2c3-4795-98df-05e64ad8ed95\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.958966 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-config-data\") pod \"8f680b41-c2c3-4795-98df-05e64ad8ed95\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.958984 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-httpd-run\") pod \"8f680b41-c2c3-4795-98df-05e64ad8ed95\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.959011 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-combined-ca-bundle\") pod \"8f680b41-c2c3-4795-98df-05e64ad8ed95\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.959470 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-logs" (OuterVolumeSpecName: "logs") pod "8f680b41-c2c3-4795-98df-05e64ad8ed95" (UID: "8f680b41-c2c3-4795-98df-05e64ad8ed95"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.959642 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8f680b41-c2c3-4795-98df-05e64ad8ed95" (UID: "8f680b41-c2c3-4795-98df-05e64ad8ed95"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.959807 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w6cx\" (UniqueName: \"kubernetes.io/projected/8f680b41-c2c3-4795-98df-05e64ad8ed95-kube-api-access-5w6cx\") pod \"8f680b41-c2c3-4795-98df-05e64ad8ed95\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.959841 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"8f680b41-c2c3-4795-98df-05e64ad8ed95\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.960386 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-public-tls-certs\") pod \"8f680b41-c2c3-4795-98df-05e64ad8ed95\" (UID: \"8f680b41-c2c3-4795-98df-05e64ad8ed95\") " Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.960909 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.960922 4767 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f680b41-c2c3-4795-98df-05e64ad8ed95-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.963514 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-scripts" (OuterVolumeSpecName: "scripts") pod "8f680b41-c2c3-4795-98df-05e64ad8ed95" (UID: "8f680b41-c2c3-4795-98df-05e64ad8ed95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.963716 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f680b41-c2c3-4795-98df-05e64ad8ed95-kube-api-access-5w6cx" (OuterVolumeSpecName: "kube-api-access-5w6cx") pod "8f680b41-c2c3-4795-98df-05e64ad8ed95" (UID: "8f680b41-c2c3-4795-98df-05e64ad8ed95"). InnerVolumeSpecName "kube-api-access-5w6cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.965480 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "8f680b41-c2c3-4795-98df-05e64ad8ed95" (UID: "8f680b41-c2c3-4795-98df-05e64ad8ed95"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:57:13 crc kubenswrapper[4767]: I1124 21:57:13.992385 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f680b41-c2c3-4795-98df-05e64ad8ed95" (UID: "8f680b41-c2c3-4795-98df-05e64ad8ed95"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.006834 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-config-data" (OuterVolumeSpecName: "config-data") pod "8f680b41-c2c3-4795-98df-05e64ad8ed95" (UID: "8f680b41-c2c3-4795-98df-05e64ad8ed95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.034699 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8f680b41-c2c3-4795-98df-05e64ad8ed95" (UID: "8f680b41-c2c3-4795-98df-05e64ad8ed95"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.063677 4767 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.064106 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.064216 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.064370 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f680b41-c2c3-4795-98df-05e64ad8ed95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.064455 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w6cx\" (UniqueName: \"kubernetes.io/projected/8f680b41-c2c3-4795-98df-05e64ad8ed95-kube-api-access-5w6cx\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.064603 4767 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.087107 4767 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.166974 4767 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.228256 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"96538813-044f-45a6-b596-07f9dec093c6","Type":"ContainerStarted","Data":"d804661df04a2013527094c32a60d7a94b22d78a19128d029a8b8c7d8a8033a9"} Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.228317 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"96538813-044f-45a6-b596-07f9dec093c6","Type":"ContainerStarted","Data":"bbf104ee3ffe1f336c747c57168ff1916e23d1ac6e8a516851cf79930dad08a8"} Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.231891 4767 generic.go:334] "Generic (PLEG): container finished" podID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerID="257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217" exitCode=0 Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.232164 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.232075 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f680b41-c2c3-4795-98df-05e64ad8ed95","Type":"ContainerDied","Data":"257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217"} Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.232410 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f680b41-c2c3-4795-98df-05e64ad8ed95","Type":"ContainerDied","Data":"f17edf1843de8dfdf689ce0190c45429621af795fb6c53ffcceae8b196dee589"} Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.232439 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.233212 4767 scope.go:117] "RemoveContainer" containerID="257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.233415 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.254044 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.254020046 podStartE2EDuration="2.254020046s" podCreationTimestamp="2025-11-24 21:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:57:14.248927191 +0000 UTC m=+1117.165910593" watchObservedRunningTime="2025-11-24 21:57:14.254020046 +0000 UTC m=+1117.171003458" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.263612 4767 scope.go:117] "RemoveContainer" containerID="f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.295836 4767 scope.go:117] "RemoveContainer" containerID="257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217" Nov 24 21:57:14 crc kubenswrapper[4767]: E1124 21:57:14.296477 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217\": container with ID starting with 257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217 not found: ID does not exist" containerID="257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.296505 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217"} err="failed to get container status \"257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217\": rpc error: code = NotFound desc = could not find container 
\"257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217\": container with ID starting with 257bf044f1e11370a168ddf844d34201ebc07925b794ba714edc0751d6acd217 not found: ID does not exist" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.296526 4767 scope.go:117] "RemoveContainer" containerID="f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497" Nov 24 21:57:14 crc kubenswrapper[4767]: E1124 21:57:14.297438 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497\": container with ID starting with f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497 not found: ID does not exist" containerID="f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.297464 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497"} err="failed to get container status \"f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497\": rpc error: code = NotFound desc = could not find container \"f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497\": container with ID starting with f2a121f8948d41a7ec74404a2c8203d38d457bf4a25401d1d0fd9edbd1ebe497 not found: ID does not exist" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.302163 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.328439 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f5a67d-feb4-402c-ac35-fc17aca926c5" path="/var/lib/kubelet/pods/31f5a67d-feb4-402c-ac35-fc17aca926c5/volumes" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.329645 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db1d7289-88a9-4dc9-a2de-3adaac6d3c9d" path="/var/lib/kubelet/pods/db1d7289-88a9-4dc9-a2de-3adaac6d3c9d/volumes" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.343103 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.355417 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.373937 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.387093 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:14 crc kubenswrapper[4767]: E1124 21:57:14.387503 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerName="glance-httpd" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.387516 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerName="glance-httpd" Nov 24 21:57:14 crc kubenswrapper[4767]: E1124 21:57:14.387535 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerName="glance-log" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.387541 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerName="glance-log" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.387729 4767 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerName="glance-httpd" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.387745 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f680b41-c2c3-4795-98df-05e64ad8ed95" containerName="glance-log" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.390064 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.392324 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.394881 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.398486 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.400415 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.402660 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.402920 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.405560 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.414990 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.474724 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-config-data\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.474837 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-log-httpd\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.474876 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjpcg\" (UniqueName: \"kubernetes.io/projected/839aba43-26fd-43cc-a67d-c7069f0a3f30-kube-api-access-sjpcg\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.474911 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvvbm\" (UniqueName: \"kubernetes.io/projected/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-kube-api-access-hvvbm\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.474930 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/839aba43-26fd-43cc-a67d-c7069f0a3f30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.475037 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-scripts\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.475066 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.475086 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-config-data\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.475115 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/839aba43-26fd-43cc-a67d-c7069f0a3f30-logs\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.475152 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.475263 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-scripts\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.475395 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-run-httpd\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.475437 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.475466 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.475494 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.526799 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:14 crc kubenswrapper[4767]: E1124 21:57:14.527609 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-hvvbm log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="c28d7ef5-bb55-48d2-b78c-ba085531ad1e" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578016 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-scripts\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578103 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578155 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-config-data\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578211 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/839aba43-26fd-43cc-a67d-c7069f0a3f30-logs\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578257 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578332 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-scripts\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578452 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-run-httpd\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578480 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578524 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578561 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578607 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-config-data\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578660 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-log-httpd\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578718 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjpcg\" (UniqueName: \"kubernetes.io/projected/839aba43-26fd-43cc-a67d-c7069f0a3f30-kube-api-access-sjpcg\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578789 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvvbm\" (UniqueName: \"kubernetes.io/projected/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-kube-api-access-hvvbm\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.578837 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/839aba43-26fd-43cc-a67d-c7069f0a3f30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.579433 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/839aba43-26fd-43cc-a67d-c7069f0a3f30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " 
pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.582754 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.583227 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-run-httpd\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.583393 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-log-httpd\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.583479 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-scripts\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.583701 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/839aba43-26fd-43cc-a67d-c7069f0a3f30-logs\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.589950 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.591225 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.591748 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-config-data\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.591910 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-config-data\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.593169 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.598538 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-scripts\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.600090 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839aba43-26fd-43cc-a67d-c7069f0a3f30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.601220 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvvbm\" (UniqueName: \"kubernetes.io/projected/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-kube-api-access-hvvbm\") pod \"ceilometer-0\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " pod="openstack/ceilometer-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.606342 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjpcg\" (UniqueName: \"kubernetes.io/projected/839aba43-26fd-43cc-a67d-c7069f0a3f30-kube-api-access-sjpcg\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.647869 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"839aba43-26fd-43cc-a67d-c7069f0a3f30\") " pod="openstack/glance-default-external-api-0" Nov 24 21:57:14 crc kubenswrapper[4767]: I1124 21:57:14.731775 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.241456 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.255333 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.391856 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-config-data\") pod \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.391900 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-run-httpd\") pod \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.391937 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-scripts\") pod \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.394186 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c28d7ef5-bb55-48d2-b78c-ba085531ad1e" (UID: "c28d7ef5-bb55-48d2-b78c-ba085531ad1e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.410452 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvvbm\" (UniqueName: \"kubernetes.io/projected/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-kube-api-access-hvvbm\") pod \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.411331 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-sg-core-conf-yaml\") pod \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.414189 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-combined-ca-bundle\") pod \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.414501 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-log-httpd\") pod \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\" (UID: \"c28d7ef5-bb55-48d2-b78c-ba085531ad1e\") " Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.414941 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c28d7ef5-bb55-48d2-b78c-ba085531ad1e" (UID: "c28d7ef5-bb55-48d2-b78c-ba085531ad1e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.420194 4767 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:15 crc kubenswrapper[4767]: I1124 21:57:15.420253 4767 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.181022 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-config-data" (OuterVolumeSpecName: "config-data") pod "c28d7ef5-bb55-48d2-b78c-ba085531ad1e" (UID: "c28d7ef5-bb55-48d2-b78c-ba085531ad1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.181124 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-kube-api-access-hvvbm" (OuterVolumeSpecName: "kube-api-access-hvvbm") pod "c28d7ef5-bb55-48d2-b78c-ba085531ad1e" (UID: "c28d7ef5-bb55-48d2-b78c-ba085531ad1e"). InnerVolumeSpecName "kube-api-access-hvvbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.181167 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c28d7ef5-bb55-48d2-b78c-ba085531ad1e" (UID: "c28d7ef5-bb55-48d2-b78c-ba085531ad1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.181763 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-scripts" (OuterVolumeSpecName: "scripts") pod "c28d7ef5-bb55-48d2-b78c-ba085531ad1e" (UID: "c28d7ef5-bb55-48d2-b78c-ba085531ad1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.204708 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c28d7ef5-bb55-48d2-b78c-ba085531ad1e" (UID: "c28d7ef5-bb55-48d2-b78c-ba085531ad1e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.255340 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.255357 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.255366 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.257175 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.257186 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvvbm\" (UniqueName: \"kubernetes.io/projected/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-kube-api-access-hvvbm\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.257197 4767 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c28d7ef5-bb55-48d2-b78c-ba085531ad1e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.363904 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f680b41-c2c3-4795-98df-05e64ad8ed95" path="/var/lib/kubelet/pods/8f680b41-c2c3-4795-98df-05e64ad8ed95/volumes" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.365038 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3731da6-5a54-4794-a84b-a8269acaabc5" path="/var/lib/kubelet/pods/c3731da6-5a54-4794-a84b-a8269acaabc5/volumes" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.374320 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.380608 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.388059 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.391179 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.394663 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.395422 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.396012 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.568536 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-log-httpd\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.568804 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-run-httpd\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.568964 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.569150 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-scripts\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.569219 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnhcn\" (UniqueName: \"kubernetes.io/projected/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-kube-api-access-cnhcn\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.569244 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.569351 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-config-data\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.671135 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-config-data\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.671198 
4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-log-httpd\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.671230 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-run-httpd\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.671298 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.671386 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-scripts\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.671433 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnhcn\" (UniqueName: \"kubernetes.io/projected/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-kube-api-access-cnhcn\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.671456 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.671818 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-log-httpd\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.672043 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-run-httpd\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.676669 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.677162 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-config-data\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.677665 4767 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.682901 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-scripts\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.689515 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnhcn\" (UniqueName: \"kubernetes.io/projected/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-kube-api-access-cnhcn\") pod \"ceilometer-0\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " pod="openstack/ceilometer-0" Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.728298 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:57:16 crc kubenswrapper[4767]: I1124 21:57:16.769818 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.227223 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:17 crc kubenswrapper[4767]: W1124 21:57:17.237384 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68d5df3b_e1f1_4f80_880a_cae72acdf3f7.slice/crio-54dac21bff61792c6a04766325f8333588f2a3c4902caa4330c47a0e681606f7 WatchSource:0}: Error finding container 54dac21bff61792c6a04766325f8333588f2a3c4902caa4330c47a0e681606f7: Status 404 returned error can't find the container with id 54dac21bff61792c6a04766325f8333588f2a3c4902caa4330c47a0e681606f7 Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.269442 4767 generic.go:334] "Generic (PLEG): container finished" podID="1fa93f02-121b-49f9-a08b-e04f44a142f8" containerID="5bc1de983909cf8b558f2c3434823057f0116823799e07bcb1042fc8ecec3d57" exitCode=0 Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.269527 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fa93f02-121b-49f9-a08b-e04f44a142f8","Type":"ContainerDied","Data":"5bc1de983909cf8b558f2c3434823057f0116823799e07bcb1042fc8ecec3d57"} Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.271005 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"839aba43-26fd-43cc-a67d-c7069f0a3f30","Type":"ContainerStarted","Data":"49677009e924b20c274b999cc881e5dda01104bbd176dd2be1888bf4fc57caef"} Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.272128 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d5df3b-e1f1-4f80-880a-cae72acdf3f7","Type":"ContainerStarted","Data":"54dac21bff61792c6a04766325f8333588f2a3c4902caa4330c47a0e681606f7"} Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.539749 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.700745 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8ntq\" (UniqueName: \"kubernetes.io/projected/1fa93f02-121b-49f9-a08b-e04f44a142f8-kube-api-access-f8ntq\") pod \"1fa93f02-121b-49f9-a08b-e04f44a142f8\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.701806 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"1fa93f02-121b-49f9-a08b-e04f44a142f8\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.701853 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-scripts\") pod \"1fa93f02-121b-49f9-a08b-e04f44a142f8\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.701910 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-config-data\") pod \"1fa93f02-121b-49f9-a08b-e04f44a142f8\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.701998 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-combined-ca-bundle\") pod \"1fa93f02-121b-49f9-a08b-e04f44a142f8\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.702080 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-internal-tls-certs\") pod \"1fa93f02-121b-49f9-a08b-e04f44a142f8\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.702127 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-logs\") pod \"1fa93f02-121b-49f9-a08b-e04f44a142f8\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.702284 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-httpd-run\") pod \"1fa93f02-121b-49f9-a08b-e04f44a142f8\" (UID: \"1fa93f02-121b-49f9-a08b-e04f44a142f8\") " Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.703565 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1fa93f02-121b-49f9-a08b-e04f44a142f8" (UID: "1fa93f02-121b-49f9-a08b-e04f44a142f8"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.708152 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-scripts" (OuterVolumeSpecName: "scripts") pod "1fa93f02-121b-49f9-a08b-e04f44a142f8" (UID: "1fa93f02-121b-49f9-a08b-e04f44a142f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.708206 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa93f02-121b-49f9-a08b-e04f44a142f8-kube-api-access-f8ntq" (OuterVolumeSpecName: "kube-api-access-f8ntq") pod "1fa93f02-121b-49f9-a08b-e04f44a142f8" (UID: "1fa93f02-121b-49f9-a08b-e04f44a142f8"). InnerVolumeSpecName "kube-api-access-f8ntq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.712198 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-logs" (OuterVolumeSpecName: "logs") pod "1fa93f02-121b-49f9-a08b-e04f44a142f8" (UID: "1fa93f02-121b-49f9-a08b-e04f44a142f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.723632 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "1fa93f02-121b-49f9-a08b-e04f44a142f8" (UID: "1fa93f02-121b-49f9-a08b-e04f44a142f8"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.743675 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fa93f02-121b-49f9-a08b-e04f44a142f8" (UID: "1fa93f02-121b-49f9-a08b-e04f44a142f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.769353 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-config-data" (OuterVolumeSpecName: "config-data") pod "1fa93f02-121b-49f9-a08b-e04f44a142f8" (UID: "1fa93f02-121b-49f9-a08b-e04f44a142f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.770799 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1fa93f02-121b-49f9-a08b-e04f44a142f8" (UID: "1fa93f02-121b-49f9-a08b-e04f44a142f8"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.805204 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.805241 4767 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fa93f02-121b-49f9-a08b-e04f44a142f8-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.805254 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8ntq\" (UniqueName: \"kubernetes.io/projected/1fa93f02-121b-49f9-a08b-e04f44a142f8-kube-api-access-f8ntq\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.805309 4767 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.805323 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.805334 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.805349 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.805375 4767 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa93f02-121b-49f9-a08b-e04f44a142f8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.827864 4767 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 24 21:57:17 crc kubenswrapper[4767]: I1124 21:57:17.909586 4767 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.285028 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"839aba43-26fd-43cc-a67d-c7069f0a3f30","Type":"ContainerStarted","Data":"83a42fee80b6e9fafba52353a8716f655f40185c8de0252451dd9d513d3c1bda"} Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.285402 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"839aba43-26fd-43cc-a67d-c7069f0a3f30","Type":"ContainerStarted","Data":"b00d7c570c644e648cc3efea9c4d370b09a9ff8c9737a4a375e4672be504d2a9"} Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.287823 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d5df3b-e1f1-4f80-880a-cae72acdf3f7","Type":"ContainerStarted","Data":"dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f"} 
Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.295260 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fa93f02-121b-49f9-a08b-e04f44a142f8","Type":"ContainerDied","Data":"b25ba1efa2f8f42625562634179f200a08ca392c68452b58f5cc2f139463bcfa"} Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.295323 4767 scope.go:117] "RemoveContainer" containerID="5bc1de983909cf8b558f2c3434823057f0116823799e07bcb1042fc8ecec3d57" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.295351 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.317971 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.317953381 podStartE2EDuration="4.317953381s" podCreationTimestamp="2025-11-24 21:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:57:18.301157065 +0000 UTC m=+1121.218140447" watchObservedRunningTime="2025-11-24 21:57:18.317953381 +0000 UTC m=+1121.234936753" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.365129 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c28d7ef5-bb55-48d2-b78c-ba085531ad1e" path="/var/lib/kubelet/pods/c28d7ef5-bb55-48d2-b78c-ba085531ad1e/volumes" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.388523 4767 scope.go:117] "RemoveContainer" containerID="ee8d79c430c570e23cc92c31a317c8619eb8070684ee32ea9790451e8ccd57b9" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.417916 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.426473 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.436943 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:57:18 crc kubenswrapper[4767]: E1124 21:57:18.437330 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa93f02-121b-49f9-a08b-e04f44a142f8" containerName="glance-log" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.437346 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa93f02-121b-49f9-a08b-e04f44a142f8" containerName="glance-log" Nov 24 21:57:18 crc kubenswrapper[4767]: E1124 21:57:18.437386 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa93f02-121b-49f9-a08b-e04f44a142f8" containerName="glance-httpd" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.437392 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa93f02-121b-49f9-a08b-e04f44a142f8" containerName="glance-httpd" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.437562 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa93f02-121b-49f9-a08b-e04f44a142f8" containerName="glance-log" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.437585 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa93f02-121b-49f9-a08b-e04f44a142f8" containerName="glance-httpd" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.438632 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.442241 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.442424 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.477076 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.521564 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.521647 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc9jx\" (UniqueName: \"kubernetes.io/projected/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-kube-api-access-lc9jx\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.521678 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.521697 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.521735 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.521761 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.521788 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.521806 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.547148 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.623689 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.623729 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.623784 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.623847 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc9jx\" (UniqueName: \"kubernetes.io/projected/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-kube-api-access-lc9jx\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.623878 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.623897 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.623942 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.623974 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: 
I1124 21:57:18.624133 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.624458 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.624562 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.628895 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.629713 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.630123 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.639227 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc9jx\" (UniqueName: \"kubernetes.io/projected/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-kube-api-access-lc9jx\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.642848 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51f52fc1-4ddc-46a5-81a4-f1a6330b86e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.661060 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:57:18 crc kubenswrapper[4767]: I1124 21:57:18.766621 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:57:19 crc kubenswrapper[4767]: I1124 21:57:19.306677 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d5df3b-e1f1-4f80-880a-cae72acdf3f7","Type":"ContainerStarted","Data":"9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e"} Nov 24 21:57:19 crc kubenswrapper[4767]: I1124 21:57:19.334794 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:57:19 crc kubenswrapper[4767]: W1124 21:57:19.336396 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51f52fc1_4ddc_46a5_81a4_f1a6330b86e2.slice/crio-cc35811e2f5fd4318e546cbcac32e4b549d3945bc2dcd2a33784243a3aa843cf WatchSource:0}: Error finding container cc35811e2f5fd4318e546cbcac32e4b549d3945bc2dcd2a33784243a3aa843cf: Status 404 returned error can't find the container with id cc35811e2f5fd4318e546cbcac32e4b549d3945bc2dcd2a33784243a3aa843cf Nov 24 21:57:19 crc kubenswrapper[4767]: I1124 21:57:19.926720 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:19 crc kubenswrapper[4767]: I1124 21:57:19.928597 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64b748f489-f8d4f" Nov 24 21:57:20 crc kubenswrapper[4767]: I1124 21:57:20.325059 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa93f02-121b-49f9-a08b-e04f44a142f8" path="/var/lib/kubelet/pods/1fa93f02-121b-49f9-a08b-e04f44a142f8/volumes" Nov 24 21:57:20 crc kubenswrapper[4767]: I1124 21:57:20.325796 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d5df3b-e1f1-4f80-880a-cae72acdf3f7","Type":"ContainerStarted","Data":"8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0"} Nov 24 21:57:20 crc kubenswrapper[4767]: I1124 21:57:20.326383 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2","Type":"ContainerStarted","Data":"c57695e6188e715b852594daa2b29947e8abdcffc55f8916c183b36f2e86a823"} Nov 24 21:57:20 crc kubenswrapper[4767]: I1124 21:57:20.326420 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2","Type":"ContainerStarted","Data":"cc35811e2f5fd4318e546cbcac32e4b549d3945bc2dcd2a33784243a3aa843cf"} Nov 24 21:57:20 crc kubenswrapper[4767]: I1124 21:57:20.802696 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d69c9d5c6-qr8nq" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Nov 24 21:57:20 crc kubenswrapper[4767]: I1124 21:57:20.803041 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:57:21 crc kubenswrapper[4767]: I1124 21:57:21.338726 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"51f52fc1-4ddc-46a5-81a4-f1a6330b86e2","Type":"ContainerStarted","Data":"7618d525954f890a3b9eefd81f6556f36e9591de44274299a6462ed83abe0c43"} Nov 24 21:57:21 crc kubenswrapper[4767]: I1124 
21:57:21.344400 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d5df3b-e1f1-4f80-880a-cae72acdf3f7","Type":"ContainerStarted","Data":"b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1"} Nov 24 21:57:21 crc kubenswrapper[4767]: I1124 21:57:21.344555 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="ceilometer-central-agent" containerID="cri-o://dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f" gracePeriod=30 Nov 24 21:57:21 crc kubenswrapper[4767]: I1124 21:57:21.344606 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="sg-core" containerID="cri-o://8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0" gracePeriod=30 Nov 24 21:57:21 crc kubenswrapper[4767]: I1124 21:57:21.344628 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="proxy-httpd" containerID="cri-o://b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1" gracePeriod=30 Nov 24 21:57:21 crc kubenswrapper[4767]: I1124 21:57:21.344657 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="ceilometer-notification-agent" containerID="cri-o://9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e" gracePeriod=30 Nov 24 21:57:21 crc kubenswrapper[4767]: I1124 21:57:21.344639 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:57:21 crc kubenswrapper[4767]: I1124 21:57:21.357401 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.3573739160000002 podStartE2EDuration="3.357373916s" podCreationTimestamp="2025-11-24 21:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:57:21.356293755 +0000 UTC m=+1124.273277147" watchObservedRunningTime="2025-11-24 21:57:21.357373916 +0000 UTC m=+1124.274357328" Nov 24 21:57:21 crc kubenswrapper[4767]: I1124 21:57:21.392115 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.695505719 podStartE2EDuration="5.392092999s" podCreationTimestamp="2025-11-24 21:57:16 +0000 UTC" firstStartedPulling="2025-11-24 21:57:17.241564731 +0000 UTC m=+1120.158548093" lastFinishedPulling="2025-11-24 21:57:20.938152001 +0000 UTC m=+1123.855135373" observedRunningTime="2025-11-24 21:57:21.382899389 +0000 UTC m=+1124.299882771" watchObservedRunningTime="2025-11-24 21:57:21.392092999 +0000 UTC m=+1124.309076391" Nov 24 21:57:22 crc kubenswrapper[4767]: I1124 21:57:22.363150 4767 generic.go:334] "Generic (PLEG): container finished" podID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerID="b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1" exitCode=0 Nov 24 21:57:22 crc kubenswrapper[4767]: I1124 21:57:22.363493 4767 generic.go:334] "Generic (PLEG): container finished" podID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerID="8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0" exitCode=2 Nov 24 21:57:22 crc kubenswrapper[4767]: 
I1124 21:57:22.363503 4767 generic.go:334] "Generic (PLEG): container finished" podID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerID="9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e" exitCode=0 Nov 24 21:57:22 crc kubenswrapper[4767]: I1124 21:57:22.363233 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d5df3b-e1f1-4f80-880a-cae72acdf3f7","Type":"ContainerDied","Data":"b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1"} Nov 24 21:57:22 crc kubenswrapper[4767]: I1124 21:57:22.363578 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d5df3b-e1f1-4f80-880a-cae72acdf3f7","Type":"ContainerDied","Data":"8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0"} Nov 24 21:57:22 crc kubenswrapper[4767]: I1124 21:57:22.363609 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d5df3b-e1f1-4f80-880a-cae72acdf3f7","Type":"ContainerDied","Data":"9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e"} Nov 24 21:57:22 crc kubenswrapper[4767]: I1124 21:57:22.709518 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Nov 24 21:57:22 crc kubenswrapper[4767]: I1124 21:57:22.754164 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Nov 24 21:57:23 crc kubenswrapper[4767]: I1124 21:57:23.375250 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Nov 24 21:57:23 crc kubenswrapper[4767]: I1124 21:57:23.409874 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Nov 24 21:57:24 crc kubenswrapper[4767]: I1124 21:57:24.733684 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 21:57:24 crc kubenswrapper[4767]: I1124 21:57:24.735968 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 21:57:24 crc kubenswrapper[4767]: I1124 21:57:24.780736 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 21:57:24 crc kubenswrapper[4767]: I1124 21:57:24.791866 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 21:57:24 crc kubenswrapper[4767]: I1124 21:57:24.950978 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.146257 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv5t4\" (UniqueName: \"kubernetes.io/projected/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-kube-api-access-mv5t4\") pod \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.146319 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-combined-ca-bundle\") pod \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.146401 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-tls-certs\") pod \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.146475 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-logs\") pod \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.146555 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-secret-key\") pod \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.146608 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-config-data\") pod \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.146629 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-scripts\") pod \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\" (UID: \"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1\") " Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.148088 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-logs" (OuterVolumeSpecName: "logs") pod "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" (UID: "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.159574 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-kube-api-access-mv5t4" (OuterVolumeSpecName: "kube-api-access-mv5t4") pod "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" (UID: "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1"). InnerVolumeSpecName "kube-api-access-mv5t4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.160022 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" (UID: "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.183164 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-config-data" (OuterVolumeSpecName: "config-data") pod "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" (UID: "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.189161 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-scripts" (OuterVolumeSpecName: "scripts") pod "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" (UID: "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.198211 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" (UID: "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.207005 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" (UID: "5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.249017 4767 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.249054 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.249063 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.249075 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv5t4\" (UniqueName: \"kubernetes.io/projected/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-kube-api-access-mv5t4\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.249084 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.249092 4767 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.249100 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.398678 4767 generic.go:334] "Generic (PLEG): container finished" podID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerID="f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad" exitCode=137 Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.398706 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d69c9d5c6-qr8nq" event={"ID":"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1","Type":"ContainerDied","Data":"f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad"} Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.398743 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d69c9d5c6-qr8nq" event={"ID":"5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1","Type":"ContainerDied","Data":"e1f8ef7cdd40d10ca6d1d25295054d5b80f7439ecdeee23fb84d80be97e10390"} Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.398759 4767 scope.go:117] "RemoveContainer" containerID="03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.399161 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.399175 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.399560 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d69c9d5c6-qr8nq" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.471737 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d69c9d5c6-qr8nq"] Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.486640 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d69c9d5c6-qr8nq"] Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.563222 4767 scope.go:117] "RemoveContainer" containerID="f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.589148 4767 scope.go:117] "RemoveContainer" containerID="03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02" Nov 24 21:57:25 crc kubenswrapper[4767]: E1124 21:57:25.589637 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02\": container with ID starting with 03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02 not found: ID does not exist" containerID="03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.589753 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02"} err="failed to get container status \"03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02\": rpc error: code = NotFound desc = could not find container \"03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02\": container with ID starting with 03e1231229dffddf0597dbe68e027adeeb5e48d723fd1a98dac847d6de683d02 not found: ID does not exist" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.589836 4767 scope.go:117] "RemoveContainer" containerID="f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad" Nov 24 21:57:25 crc kubenswrapper[4767]: E1124 21:57:25.590232 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad\": container with ID starting with f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad not found: ID does not exist" containerID="f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad" Nov 24 21:57:25 crc kubenswrapper[4767]: I1124 21:57:25.590284 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad"} err="failed to get container status \"f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad\": rpc error: code = NotFound desc = could not find container \"f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad\": container with ID starting with f04f9d7c6a68eeaf1c3136e56eabeafd6afa514b060cf754a69c0410c90dc7ad not found: ID does not exist" Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.326418 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" path="/var/lib/kubelet/pods/5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1/volumes" Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.838872 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.982243 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-scripts\") pod \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.982323 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnhcn\" (UniqueName: \"kubernetes.io/projected/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-kube-api-access-cnhcn\") pod \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.982366 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-sg-core-conf-yaml\") pod \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.982403 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-run-httpd\") pod \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.982442 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-combined-ca-bundle\") pod \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.982553 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-config-data\") pod \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.982701 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-log-httpd\") pod \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\" (UID: \"68d5df3b-e1f1-4f80-880a-cae72acdf3f7\") " Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.983392 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "68d5df3b-e1f1-4f80-880a-cae72acdf3f7" (UID: "68d5df3b-e1f1-4f80-880a-cae72acdf3f7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.983563 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "68d5df3b-e1f1-4f80-880a-cae72acdf3f7" (UID: "68d5df3b-e1f1-4f80-880a-cae72acdf3f7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.988590 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-kube-api-access-cnhcn" (OuterVolumeSpecName: "kube-api-access-cnhcn") pod "68d5df3b-e1f1-4f80-880a-cae72acdf3f7" (UID: "68d5df3b-e1f1-4f80-880a-cae72acdf3f7"). InnerVolumeSpecName "kube-api-access-cnhcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:26 crc kubenswrapper[4767]: I1124 21:57:26.988668 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-scripts" (OuterVolumeSpecName: "scripts") pod "68d5df3b-e1f1-4f80-880a-cae72acdf3f7" (UID: "68d5df3b-e1f1-4f80-880a-cae72acdf3f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.010724 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "68d5df3b-e1f1-4f80-880a-cae72acdf3f7" (UID: "68d5df3b-e1f1-4f80-880a-cae72acdf3f7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.069439 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68d5df3b-e1f1-4f80-880a-cae72acdf3f7" (UID: "68d5df3b-e1f1-4f80-880a-cae72acdf3f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.087787 4767 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.087819 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.087831 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnhcn\" (UniqueName: \"kubernetes.io/projected/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-kube-api-access-cnhcn\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.087842 4767 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.087851 4767 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.087860 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.096332 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-config-data" (OuterVolumeSpecName: "config-data") pod "68d5df3b-e1f1-4f80-880a-cae72acdf3f7" (UID: "68d5df3b-e1f1-4f80-880a-cae72acdf3f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.189371 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d5df3b-e1f1-4f80-880a-cae72acdf3f7-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.371218 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.413152 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.440450 4767 generic.go:334] "Generic (PLEG): container finished" podID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerID="dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f" exitCode=0 Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.441475 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.445819 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d5df3b-e1f1-4f80-880a-cae72acdf3f7","Type":"ContainerDied","Data":"dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f"} Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.446007 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d5df3b-e1f1-4f80-880a-cae72acdf3f7","Type":"ContainerDied","Data":"54dac21bff61792c6a04766325f8333588f2a3c4902caa4330c47a0e681606f7"} Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.446099 4767 scope.go:117] "RemoveContainer" containerID="b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.479429 4767 scope.go:117] "RemoveContainer" containerID="8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.499066 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.511127 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.545468 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:27 crc kubenswrapper[4767]: E1124 21:57:27.545945 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.545964 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon" Nov 24 21:57:27 crc kubenswrapper[4767]: E1124 21:57:27.545986 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="ceilometer-central-agent" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.545992 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="ceilometer-central-agent" Nov 24 21:57:27 
crc kubenswrapper[4767]: E1124 21:57:27.546014 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="sg-core" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.546020 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="sg-core" Nov 24 21:57:27 crc kubenswrapper[4767]: E1124 21:57:27.546028 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon-log" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.546034 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon-log" Nov 24 21:57:27 crc kubenswrapper[4767]: E1124 21:57:27.546042 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="ceilometer-notification-agent" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.546048 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="ceilometer-notification-agent" Nov 24 21:57:27 crc kubenswrapper[4767]: E1124 21:57:27.546057 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="proxy-httpd" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.546063 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="proxy-httpd" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.546299 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="sg-core" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.546314 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="proxy-httpd" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.546327 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.546338 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="ceilometer-central-agent" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.546346 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dbdad0b-f4f8-4f8b-adb8-6a6a6ac192b1" containerName="horizon-log" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.546358 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" containerName="ceilometer-notification-agent" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.548289 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.551575 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.553202 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.557926 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.564106 4767 scope.go:117] "RemoveContainer" containerID="9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.602004 4767 scope.go:117] "RemoveContainer" containerID="dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.618241 4767 scope.go:117] "RemoveContainer" containerID="b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1" Nov 24 21:57:27 crc kubenswrapper[4767]: E1124 21:57:27.618805 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1\": container with ID starting with b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1 not found: ID does not exist" containerID="b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.618853 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1"} err="failed to get container status \"b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1\": rpc error: code = NotFound desc = could not find container \"b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1\": container with ID starting with b60fd3717d8785e129f1f68e528561535624eb8558c169c79f228a71602affb1 not found: ID does not exist" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.618879 4767 scope.go:117] "RemoveContainer" containerID="8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0" Nov 24 21:57:27 crc kubenswrapper[4767]: E1124 21:57:27.619470 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0\": container with ID starting with 8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0 not found: ID does not exist" containerID="8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.619508 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0"} err="failed to get container status \"8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0\": rpc error: code = NotFound desc = could not find container \"8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0\": container with ID starting with 8821bc4b9cc9fd77ba2ef082ff195a9c89f5bb51910ee5522ccdfc56a0cc6cc0 not found: ID does not exist" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.619566 4767 scope.go:117] "RemoveContainer" containerID="9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e" Nov 24 
21:57:27 crc kubenswrapper[4767]: E1124 21:57:27.620163 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e\": container with ID starting with 9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e not found: ID does not exist" containerID="9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.620195 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e"} err="failed to get container status \"9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e\": rpc error: code = NotFound desc = could not find container \"9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e\": container with ID starting with 9ad390c35d432505ea9831129d42ed16ec4daaaee08c44ef82cf080a8ddcea9e not found: ID does not exist" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.620214 4767 scope.go:117] "RemoveContainer" containerID="dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f" Nov 24 21:57:27 crc kubenswrapper[4767]: E1124 21:57:27.621211 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f\": container with ID starting with dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f not found: ID does not exist" containerID="dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.621235 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f"} err="failed to get container status \"dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f\": rpc error: code = NotFound desc = could not find container \"dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f\": container with ID starting with dd7e4f1e3ef3182044d50e3fd981386461bb9ef4fc91488b4941b387edcf299f not found: ID does not exist" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.699748 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.699801 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-config-data\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.699834 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-log-httpd\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.699892 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-z6wr2\" (UniqueName: \"kubernetes.io/projected/49dd16d2-f832-438a-928e-37461b92d4c8-kube-api-access-z6wr2\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.700089 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.700196 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-scripts\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.700234 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-run-httpd\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.802509 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.802590 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-scripts\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.802630 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-run-httpd\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.802692 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.802726 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-config-data\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.802766 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-log-httpd\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.802875 4767 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z6wr2\" (UniqueName: \"kubernetes.io/projected/49dd16d2-f832-438a-928e-37461b92d4c8-kube-api-access-z6wr2\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.803232 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-run-httpd\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.803302 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-log-httpd\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.810032 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-scripts\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.811727 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.815049 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.816541 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-config-data\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.825124 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6wr2\" (UniqueName: \"kubernetes.io/projected/49dd16d2-f832-438a-928e-37461b92d4c8-kube-api-access-z6wr2\") pod \"ceilometer-0\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " pod="openstack/ceilometer-0" Nov 24 21:57:27 crc kubenswrapper[4767]: I1124 21:57:27.882924 4767 util.go:30] "No sandbox for pod can be found. 
Nov 24 21:57:28 crc kubenswrapper[4767]: I1124 21:57:28.327594 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68d5df3b-e1f1-4f80-880a-cae72acdf3f7" path="/var/lib/kubelet/pods/68d5df3b-e1f1-4f80-880a-cae72acdf3f7/volumes"
Nov 24 21:57:28 crc kubenswrapper[4767]: I1124 21:57:28.332798 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:57:28 crc kubenswrapper[4767]: W1124 21:57:28.336737 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49dd16d2_f832_438a_928e_37461b92d4c8.slice/crio-cd26b89da0232d0ecf3e183e02c7c04d5c623e84ae1e09082412707809c78c7a WatchSource:0}: Error finding container cd26b89da0232d0ecf3e183e02c7c04d5c623e84ae1e09082412707809c78c7a: Status 404 returned error can't find the container with id cd26b89da0232d0ecf3e183e02c7c04d5c623e84ae1e09082412707809c78c7a
Nov 24 21:57:28 crc kubenswrapper[4767]: I1124 21:57:28.457053 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49dd16d2-f832-438a-928e-37461b92d4c8","Type":"ContainerStarted","Data":"cd26b89da0232d0ecf3e183e02c7c04d5c623e84ae1e09082412707809c78c7a"}
Nov 24 21:57:28 crc kubenswrapper[4767]: I1124 21:57:28.767354 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 21:57:28 crc kubenswrapper[4767]: I1124 21:57:28.767396 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 21:57:28 crc kubenswrapper[4767]: I1124 21:57:28.801956 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 21:57:28 crc kubenswrapper[4767]: I1124 21:57:28.813718 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 21:57:28 crc kubenswrapper[4767]: I1124 21:57:28.933288 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:57:29 crc kubenswrapper[4767]: I1124 21:57:29.472589 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49dd16d2-f832-438a-928e-37461b92d4c8","Type":"ContainerStarted","Data":"8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea"}
Nov 24 21:57:29 crc kubenswrapper[4767]: I1124 21:57:29.472637 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 21:57:29 crc kubenswrapper[4767]: I1124 21:57:29.472789 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 21:57:30 crc kubenswrapper[4767]: I1124 21:57:30.483392 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49dd16d2-f832-438a-928e-37461b92d4c8","Type":"ContainerStarted","Data":"877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4"}
Nov 24 21:57:30 crc kubenswrapper[4767]: I1124 21:57:30.484166 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49dd16d2-f832-438a-928e-37461b92d4c8","Type":"ContainerStarted","Data":"fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff"}
Nov 24 21:57:31 crc kubenswrapper[4767]: I1124 21:57:31.480804 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 21:57:31 crc kubenswrapper[4767]: I1124 21:57:31.496045 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 21:57:31 crc kubenswrapper[4767]: I1124 21:57:31.555014 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 21:57:32 crc kubenswrapper[4767]: I1124 21:57:32.507839 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49dd16d2-f832-438a-928e-37461b92d4c8","Type":"ContainerStarted","Data":"5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe"}
Nov 24 21:57:32 crc kubenswrapper[4767]: I1124 21:57:32.508102 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="ceilometer-central-agent" containerID="cri-o://8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea" gracePeriod=30
Nov 24 21:57:32 crc kubenswrapper[4767]: I1124 21:57:32.508246 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="sg-core" containerID="cri-o://877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4" gracePeriod=30
Nov 24 21:57:32 crc kubenswrapper[4767]: I1124 21:57:32.508184 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="proxy-httpd" containerID="cri-o://5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe" gracePeriod=30
Nov 24 21:57:32 crc kubenswrapper[4767]: I1124 21:57:32.508214 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="ceilometer-notification-agent" containerID="cri-o://fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff" gracePeriod=30
Nov 24 21:57:32 crc kubenswrapper[4767]: I1124 21:57:32.543797 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.36182674 podStartE2EDuration="5.543776563s" podCreationTimestamp="2025-11-24 21:57:27 +0000 UTC" firstStartedPulling="2025-11-24 21:57:28.338940136 +0000 UTC m=+1131.255923508" lastFinishedPulling="2025-11-24 21:57:31.520889959 +0000 UTC m=+1134.437873331" observedRunningTime="2025-11-24 21:57:32.537937908 +0000 UTC m=+1135.454921280" watchObservedRunningTime="2025-11-24 21:57:32.543776563 +0000 UTC m=+1135.460759935"
Nov 24 21:57:33 crc kubenswrapper[4767]: I1124 21:57:33.519402 4767 generic.go:334] "Generic (PLEG): container finished" podID="49dd16d2-f832-438a-928e-37461b92d4c8" containerID="5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe" exitCode=0
Nov 24 21:57:33 crc kubenswrapper[4767]: I1124 21:57:33.519685 4767 generic.go:334] "Generic (PLEG): container finished" podID="49dd16d2-f832-438a-928e-37461b92d4c8" containerID="877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4" exitCode=2
Nov 24 21:57:33 crc kubenswrapper[4767]: I1124 21:57:33.519696 4767 generic.go:334] "Generic (PLEG): container finished" podID="49dd16d2-f832-438a-928e-37461b92d4c8" containerID="fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff" exitCode=0
Nov 24 21:57:33 crc kubenswrapper[4767]: I1124 21:57:33.519478 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49dd16d2-f832-438a-928e-37461b92d4c8","Type":"ContainerDied","Data":"5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe"}
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49dd16d2-f832-438a-928e-37461b92d4c8","Type":"ContainerDied","Data":"5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe"} Nov 24 21:57:33 crc kubenswrapper[4767]: I1124 21:57:33.519785 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49dd16d2-f832-438a-928e-37461b92d4c8","Type":"ContainerDied","Data":"877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4"} Nov 24 21:57:33 crc kubenswrapper[4767]: I1124 21:57:33.519803 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49dd16d2-f832-438a-928e-37461b92d4c8","Type":"ContainerDied","Data":"fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff"} Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.103248 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-69dfn"] Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.104578 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-69dfn" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.116990 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-69dfn"] Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.203321 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-h6qt4"] Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.204499 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h6qt4" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.212753 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-586a-account-create-kr7s2"] Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.214073 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-586a-account-create-kr7s2" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.215827 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.224803 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-h6qt4"] Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.232196 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-586a-account-create-kr7s2"] Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.239987 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94rl\" (UniqueName: \"kubernetes.io/projected/50135ea4-cbb7-47f5-ad9d-6c039017bc47-kube-api-access-b94rl\") pod \"nova-api-db-create-69dfn\" (UID: \"50135ea4-cbb7-47f5-ad9d-6c039017bc47\") " pod="openstack/nova-api-db-create-69dfn" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.240110 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50135ea4-cbb7-47f5-ad9d-6c039017bc47-operator-scripts\") pod \"nova-api-db-create-69dfn\" (UID: \"50135ea4-cbb7-47f5-ad9d-6c039017bc47\") " pod="openstack/nova-api-db-create-69dfn" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.295719 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9c6sk"] Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.296830 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9c6sk" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.307303 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9c6sk"] Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.342258 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94rl\" (UniqueName: \"kubernetes.io/projected/50135ea4-cbb7-47f5-ad9d-6c039017bc47-kube-api-access-b94rl\") pod \"nova-api-db-create-69dfn\" (UID: \"50135ea4-cbb7-47f5-ad9d-6c039017bc47\") " pod="openstack/nova-api-db-create-69dfn" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.342344 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stb7l\" (UniqueName: \"kubernetes.io/projected/6dedacc6-c898-4425-908b-6e94ae7bdc7f-kube-api-access-stb7l\") pod \"nova-api-586a-account-create-kr7s2\" (UID: \"6dedacc6-c898-4425-908b-6e94ae7bdc7f\") " pod="openstack/nova-api-586a-account-create-kr7s2" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.342391 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5swd\" (UniqueName: \"kubernetes.io/projected/5aed67a4-e908-4066-b288-5f37c332a247-kube-api-access-l5swd\") pod \"nova-cell0-db-create-h6qt4\" (UID: \"5aed67a4-e908-4066-b288-5f37c332a247\") " pod="openstack/nova-cell0-db-create-h6qt4" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.342418 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw5d6\" (UniqueName: \"kubernetes.io/projected/b153a138-74ce-4646-ae74-aba4aaa74152-kube-api-access-kw5d6\") pod \"nova-cell1-db-create-9c6sk\" (UID: \"b153a138-74ce-4646-ae74-aba4aaa74152\") " 
pod="openstack/nova-cell1-db-create-9c6sk" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.342487 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50135ea4-cbb7-47f5-ad9d-6c039017bc47-operator-scripts\") pod \"nova-api-db-create-69dfn\" (UID: \"50135ea4-cbb7-47f5-ad9d-6c039017bc47\") " pod="openstack/nova-api-db-create-69dfn" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.342542 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b153a138-74ce-4646-ae74-aba4aaa74152-operator-scripts\") pod \"nova-cell1-db-create-9c6sk\" (UID: \"b153a138-74ce-4646-ae74-aba4aaa74152\") " pod="openstack/nova-cell1-db-create-9c6sk" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.342596 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dedacc6-c898-4425-908b-6e94ae7bdc7f-operator-scripts\") pod \"nova-api-586a-account-create-kr7s2\" (UID: \"6dedacc6-c898-4425-908b-6e94ae7bdc7f\") " pod="openstack/nova-api-586a-account-create-kr7s2" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.342636 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aed67a4-e908-4066-b288-5f37c332a247-operator-scripts\") pod \"nova-cell0-db-create-h6qt4\" (UID: \"5aed67a4-e908-4066-b288-5f37c332a247\") " pod="openstack/nova-cell0-db-create-h6qt4" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.343670 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50135ea4-cbb7-47f5-ad9d-6c039017bc47-operator-scripts\") pod \"nova-api-db-create-69dfn\" (UID: \"50135ea4-cbb7-47f5-ad9d-6c039017bc47\") " pod="openstack/nova-api-db-create-69dfn" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.360503 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94rl\" (UniqueName: \"kubernetes.io/projected/50135ea4-cbb7-47f5-ad9d-6c039017bc47-kube-api-access-b94rl\") pod \"nova-api-db-create-69dfn\" (UID: \"50135ea4-cbb7-47f5-ad9d-6c039017bc47\") " pod="openstack/nova-api-db-create-69dfn" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.407767 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-978e-account-create-sfbws"] Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.409012 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-978e-account-create-sfbws" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.411323 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.423354 4767 util.go:30] "No sandbox for pod can be found. 
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.431358 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-978e-account-create-sfbws"]
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.443822 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b153a138-74ce-4646-ae74-aba4aaa74152-operator-scripts\") pod \"nova-cell1-db-create-9c6sk\" (UID: \"b153a138-74ce-4646-ae74-aba4aaa74152\") " pod="openstack/nova-cell1-db-create-9c6sk"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.443961 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dedacc6-c898-4425-908b-6e94ae7bdc7f-operator-scripts\") pod \"nova-api-586a-account-create-kr7s2\" (UID: \"6dedacc6-c898-4425-908b-6e94ae7bdc7f\") " pod="openstack/nova-api-586a-account-create-kr7s2"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.444029 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aed67a4-e908-4066-b288-5f37c332a247-operator-scripts\") pod \"nova-cell0-db-create-h6qt4\" (UID: \"5aed67a4-e908-4066-b288-5f37c332a247\") " pod="openstack/nova-cell0-db-create-h6qt4"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.444118 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stb7l\" (UniqueName: \"kubernetes.io/projected/6dedacc6-c898-4425-908b-6e94ae7bdc7f-kube-api-access-stb7l\") pod \"nova-api-586a-account-create-kr7s2\" (UID: \"6dedacc6-c898-4425-908b-6e94ae7bdc7f\") " pod="openstack/nova-api-586a-account-create-kr7s2"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.444158 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5swd\" (UniqueName: \"kubernetes.io/projected/5aed67a4-e908-4066-b288-5f37c332a247-kube-api-access-l5swd\") pod \"nova-cell0-db-create-h6qt4\" (UID: \"5aed67a4-e908-4066-b288-5f37c332a247\") " pod="openstack/nova-cell0-db-create-h6qt4"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.444196 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw5d6\" (UniqueName: \"kubernetes.io/projected/b153a138-74ce-4646-ae74-aba4aaa74152-kube-api-access-kw5d6\") pod \"nova-cell1-db-create-9c6sk\" (UID: \"b153a138-74ce-4646-ae74-aba4aaa74152\") " pod="openstack/nova-cell1-db-create-9c6sk"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.447172 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b153a138-74ce-4646-ae74-aba4aaa74152-operator-scripts\") pod \"nova-cell1-db-create-9c6sk\" (UID: \"b153a138-74ce-4646-ae74-aba4aaa74152\") " pod="openstack/nova-cell1-db-create-9c6sk"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.448494 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dedacc6-c898-4425-908b-6e94ae7bdc7f-operator-scripts\") pod \"nova-api-586a-account-create-kr7s2\" (UID: \"6dedacc6-c898-4425-908b-6e94ae7bdc7f\") " pod="openstack/nova-api-586a-account-create-kr7s2"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.449066 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aed67a4-e908-4066-b288-5f37c332a247-operator-scripts\") pod \"nova-cell0-db-create-h6qt4\" (UID: \"5aed67a4-e908-4066-b288-5f37c332a247\") " pod="openstack/nova-cell0-db-create-h6qt4"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.462361 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw5d6\" (UniqueName: \"kubernetes.io/projected/b153a138-74ce-4646-ae74-aba4aaa74152-kube-api-access-kw5d6\") pod \"nova-cell1-db-create-9c6sk\" (UID: \"b153a138-74ce-4646-ae74-aba4aaa74152\") " pod="openstack/nova-cell1-db-create-9c6sk"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.465410 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stb7l\" (UniqueName: \"kubernetes.io/projected/6dedacc6-c898-4425-908b-6e94ae7bdc7f-kube-api-access-stb7l\") pod \"nova-api-586a-account-create-kr7s2\" (UID: \"6dedacc6-c898-4425-908b-6e94ae7bdc7f\") " pod="openstack/nova-api-586a-account-create-kr7s2"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.466992 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5swd\" (UniqueName: \"kubernetes.io/projected/5aed67a4-e908-4066-b288-5f37c332a247-kube-api-access-l5swd\") pod \"nova-cell0-db-create-h6qt4\" (UID: \"5aed67a4-e908-4066-b288-5f37c332a247\") " pod="openstack/nova-cell0-db-create-h6qt4"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.526789 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h6qt4"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.540047 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-586a-account-create-kr7s2"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.545979 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-operator-scripts\") pod \"nova-cell0-978e-account-create-sfbws\" (UID: \"23042e8e-dbe2-4fa2-adda-ebd1b50512ec\") " pod="openstack/nova-cell0-978e-account-create-sfbws"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.546041 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rktl8\" (UniqueName: \"kubernetes.io/projected/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-kube-api-access-rktl8\") pod \"nova-cell0-978e-account-create-sfbws\" (UID: \"23042e8e-dbe2-4fa2-adda-ebd1b50512ec\") " pod="openstack/nova-cell0-978e-account-create-sfbws"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.623717 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9c6sk"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.633197 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-edf6-account-create-9hkk9"]
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.634782 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-edf6-account-create-9hkk9"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.637711 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.652756 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-operator-scripts\") pod \"nova-cell0-978e-account-create-sfbws\" (UID: \"23042e8e-dbe2-4fa2-adda-ebd1b50512ec\") " pod="openstack/nova-cell0-978e-account-create-sfbws"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.655234 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rktl8\" (UniqueName: \"kubernetes.io/projected/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-kube-api-access-rktl8\") pod \"nova-cell0-978e-account-create-sfbws\" (UID: \"23042e8e-dbe2-4fa2-adda-ebd1b50512ec\") " pod="openstack/nova-cell0-978e-account-create-sfbws"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.656155 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-operator-scripts\") pod \"nova-cell0-978e-account-create-sfbws\" (UID: \"23042e8e-dbe2-4fa2-adda-ebd1b50512ec\") " pod="openstack/nova-cell0-978e-account-create-sfbws"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.678813 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-edf6-account-create-9hkk9"]
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.701047 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rktl8\" (UniqueName: \"kubernetes.io/projected/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-kube-api-access-rktl8\") pod \"nova-cell0-978e-account-create-sfbws\" (UID: \"23042e8e-dbe2-4fa2-adda-ebd1b50512ec\") " pod="openstack/nova-cell0-978e-account-create-sfbws"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.730049 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-978e-account-create-sfbws"
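Each of these job pods mounts exactly two volumes: an operator-scripts ConfigMap and a kubernetes.io/projected kube-api-access-* volume, which is how the service-account token, CA certificate, and namespace reach the container. The in-pod view of that projected volume uses the standard Kubernetes mount path below (a convention, not something shown in this log); a sketch that only works when run inside a pod:

    from pathlib import Path

    # Standard in-pod location of the projected kube-api-access-* volume.
    SA_DIR = Path("/var/run/secrets/kubernetes.io/serviceaccount")

    def service_account_identity():
        """Read what the projected volume exposes: namespace, token, CA cert."""
        return {
            "namespace": (SA_DIR / "namespace").read_text(),
            "token": (SA_DIR / "token").read_text()[:20] + "...",  # never print a full credential
            "ca_cert": (SA_DIR / "ca.crt").read_text().splitlines()[0],
        }

    if __name__ == "__main__":
        print(service_account_identity())  # only meaningful inside a pod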
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.758703 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-operator-scripts\") pod \"nova-cell1-edf6-account-create-9hkk9\" (UID: \"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f\") " pod="openstack/nova-cell1-edf6-account-create-9hkk9"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.758774 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tv6v\" (UniqueName: \"kubernetes.io/projected/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-kube-api-access-7tv6v\") pod \"nova-cell1-edf6-account-create-9hkk9\" (UID: \"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f\") " pod="openstack/nova-cell1-edf6-account-create-9hkk9"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.860163 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-operator-scripts\") pod \"nova-cell1-edf6-account-create-9hkk9\" (UID: \"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f\") " pod="openstack/nova-cell1-edf6-account-create-9hkk9"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.860229 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tv6v\" (UniqueName: \"kubernetes.io/projected/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-kube-api-access-7tv6v\") pod \"nova-cell1-edf6-account-create-9hkk9\" (UID: \"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f\") " pod="openstack/nova-cell1-edf6-account-create-9hkk9"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.861094 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-operator-scripts\") pod \"nova-cell1-edf6-account-create-9hkk9\" (UID: \"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f\") " pod="openstack/nova-cell1-edf6-account-create-9hkk9"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.885859 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tv6v\" (UniqueName: \"kubernetes.io/projected/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-kube-api-access-7tv6v\") pod \"nova-cell1-edf6-account-create-9hkk9\" (UID: \"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f\") " pod="openstack/nova-cell1-edf6-account-create-9hkk9"
Nov 24 21:57:34 crc kubenswrapper[4767]: I1124 21:57:34.966257 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-69dfn"]
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.006668 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-edf6-account-create-9hkk9"
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.183352 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-586a-account-create-kr7s2"]
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.191024 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-h6qt4"]
Nov 24 21:57:35 crc kubenswrapper[4767]: W1124 21:57:35.201676 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5aed67a4_e908_4066_b288_5f37c332a247.slice/crio-0f100556313ccd2bff19ccb3d84c6aebc0f6383073502e40ce5c8cc4f3a1eb5e WatchSource:0}: Error finding container 0f100556313ccd2bff19ccb3d84c6aebc0f6383073502e40ce5c8cc4f3a1eb5e: Status 404 returned error can't find the container with id 0f100556313ccd2bff19ccb3d84c6aebc0f6383073502e40ce5c8cc4f3a1eb5e
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.287309 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9c6sk"]
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.367905 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-978e-account-create-sfbws"]
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.481325 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.481374 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.481414 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd"
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.482164 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2a57db0a7357f691890d9ae543dd8c8e63ac1b14aa419c6ceaa2fe9ae17ceb2"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.482220 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://b2a57db0a7357f691890d9ae543dd8c8e63ac1b14aa419c6ceaa2fe9ae17ceb2" gracePeriod=600
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.547133 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h6qt4" event={"ID":"5aed67a4-e908-4066-b288-5f37c332a247","Type":"ContainerStarted","Data":"032b605b6005bc68182099ff68a621cf37c89558022e15ec22a5792693c08d72"}
Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.547174 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h6qt4" event={"ID":"5aed67a4-e908-4066-b288-5f37c332a247","Type":"ContainerStarted","Data":"0f100556313ccd2bff19ccb3d84c6aebc0f6383073502e40ce5c8cc4f3a1eb5e"}
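The machine-config-daemon records show the complete liveness-failure path: the HTTP GET to http://127.0.0.1:8798/health is refused, prober.go marks the probe failed, the SyncLoop reports the container unhealthy, and the kubelet kills it with the pod's 600-second grace period so it can be restarted. The check itself is just an HTTP GET that treats connection errors and 4xx/5xx responses as failure; a minimal stand-alone illustration of that semantics (not the kubelet's prober):

    import urllib.error
    import urllib.request

    def http_probe(url: str, timeout: float = 1.0) -> tuple[bool, str]:
        """Mimic an HTTP liveness check: success on 2xx/3xx, failure otherwise."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 400, f"HTTP {resp.status}"
        except urllib.error.HTTPError as e:   # server answered with 4xx/5xx
            return False, f"HTTP {e.code}"
        except OSError as e:                  # connection refused, timeout, DNS, ...
            return False, f'Get "{url}": {e}'

    healthy, detail = http_probe("http://127.0.0.1:8798/health")
    print("Liveness probe", "succeeded" if healthy else f"failed: {detail}")

Against a port where nothing is listening, this reports the same "connect: connection refused" condition that the prober logged above.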
event={"ID":"5aed67a4-e908-4066-b288-5f37c332a247","Type":"ContainerStarted","Data":"0f100556313ccd2bff19ccb3d84c6aebc0f6383073502e40ce5c8cc4f3a1eb5e"} Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.548776 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9c6sk" event={"ID":"b153a138-74ce-4646-ae74-aba4aaa74152","Type":"ContainerStarted","Data":"f043cc66b72dfbd9dff912f950713bf507c9d8c3666af9108bfe411ba2068ef7"} Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.551695 4767 generic.go:334] "Generic (PLEG): container finished" podID="50135ea4-cbb7-47f5-ad9d-6c039017bc47" containerID="2d2c973664ce878aa4fcc964244344d9a4f82869ff14be8ad42d2477b13d0f3d" exitCode=0 Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.551931 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-69dfn" event={"ID":"50135ea4-cbb7-47f5-ad9d-6c039017bc47","Type":"ContainerDied","Data":"2d2c973664ce878aa4fcc964244344d9a4f82869ff14be8ad42d2477b13d0f3d"} Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.552009 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-69dfn" event={"ID":"50135ea4-cbb7-47f5-ad9d-6c039017bc47","Type":"ContainerStarted","Data":"6462961e412bfcde9d48cc3280b69759936035eb3cc01258c32c0c394d41bd6c"} Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.553134 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-edf6-account-create-9hkk9"] Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.553355 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-978e-account-create-sfbws" event={"ID":"23042e8e-dbe2-4fa2-adda-ebd1b50512ec","Type":"ContainerStarted","Data":"2f2cf56e29a57b51131940e132bba66ee3a4f7fc1826af01980cfcec97bb6476"} Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.556182 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-586a-account-create-kr7s2" event={"ID":"6dedacc6-c898-4425-908b-6e94ae7bdc7f","Type":"ContainerStarted","Data":"027aca7d1aaad75210a05dca37006d5ffba5db063e9124177c610128aae07904"} Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.556241 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-586a-account-create-kr7s2" event={"ID":"6dedacc6-c898-4425-908b-6e94ae7bdc7f","Type":"ContainerStarted","Data":"be8bb517bf3c8ac19fc3e4b98afb3ec5fb010ac11a8e3cd8e03d784445b71d53"} Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.566593 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-h6qt4" podStartSLOduration=1.566573907 podStartE2EDuration="1.566573907s" podCreationTimestamp="2025-11-24 21:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:57:35.561323558 +0000 UTC m=+1138.478306950" watchObservedRunningTime="2025-11-24 21:57:35.566573907 +0000 UTC m=+1138.483557269" Nov 24 21:57:35 crc kubenswrapper[4767]: I1124 21:57:35.596392 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-586a-account-create-kr7s2" podStartSLOduration=1.596374461 podStartE2EDuration="1.596374461s" podCreationTimestamp="2025-11-24 21:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:57:35.593577932 +0000 UTC m=+1138.510561314" 
watchObservedRunningTime="2025-11-24 21:57:35.596374461 +0000 UTC m=+1138.513357843" Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.575813 4767 generic.go:334] "Generic (PLEG): container finished" podID="5aed67a4-e908-4066-b288-5f37c332a247" containerID="032b605b6005bc68182099ff68a621cf37c89558022e15ec22a5792693c08d72" exitCode=0 Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.575932 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h6qt4" event={"ID":"5aed67a4-e908-4066-b288-5f37c332a247","Type":"ContainerDied","Data":"032b605b6005bc68182099ff68a621cf37c89558022e15ec22a5792693c08d72"} Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.579091 4767 generic.go:334] "Generic (PLEG): container finished" podID="b153a138-74ce-4646-ae74-aba4aaa74152" containerID="c13ee123e0ce6c93c3fe2b74b07f63baa0a9ee5c34376f6cc2b9266bcb70ce6b" exitCode=0 Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.579148 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9c6sk" event={"ID":"b153a138-74ce-4646-ae74-aba4aaa74152","Type":"ContainerDied","Data":"c13ee123e0ce6c93c3fe2b74b07f63baa0a9ee5c34376f6cc2b9266bcb70ce6b"} Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.582613 4767 generic.go:334] "Generic (PLEG): container finished" podID="23042e8e-dbe2-4fa2-adda-ebd1b50512ec" containerID="f7bd322832692386e3a702e96fc9bd66d1c4ffb6fe48047207f8125835152664" exitCode=0 Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.582697 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-978e-account-create-sfbws" event={"ID":"23042e8e-dbe2-4fa2-adda-ebd1b50512ec","Type":"ContainerDied","Data":"f7bd322832692386e3a702e96fc9bd66d1c4ffb6fe48047207f8125835152664"} Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.584297 4767 generic.go:334] "Generic (PLEG): container finished" podID="f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f" containerID="3d3f7e830cb13ff4e4ed92e28364f123ba44a047edc9e3c106582193c50a97dd" exitCode=0 Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.584396 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-edf6-account-create-9hkk9" event={"ID":"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f","Type":"ContainerDied","Data":"3d3f7e830cb13ff4e4ed92e28364f123ba44a047edc9e3c106582193c50a97dd"} Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.584432 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-edf6-account-create-9hkk9" event={"ID":"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f","Type":"ContainerStarted","Data":"475a364bdbb9752ab3451b77c274fe9581aade4c6d521f6055d3599bd6d66888"} Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.585760 4767 generic.go:334] "Generic (PLEG): container finished" podID="6dedacc6-c898-4425-908b-6e94ae7bdc7f" containerID="027aca7d1aaad75210a05dca37006d5ffba5db063e9124177c610128aae07904" exitCode=0 Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.585825 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-586a-account-create-kr7s2" event={"ID":"6dedacc6-c898-4425-908b-6e94ae7bdc7f","Type":"ContainerDied","Data":"027aca7d1aaad75210a05dca37006d5ffba5db063e9124177c610128aae07904"} Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.588549 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="b2a57db0a7357f691890d9ae543dd8c8e63ac1b14aa419c6ceaa2fe9ae17ceb2" exitCode=0 Nov 24 21:57:36 crc 
Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.588844 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"cb71cfb4f27344cb7cceaf9ac7651774b144254e6ab13360f5b5c998afd38e04"}
Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.588873 4767 scope.go:117] "RemoveContainer" containerID="e688e489a883e7391dd101f5a5646e7206f88c9971f33a2eee17c7b8ffed628d"
Nov 24 21:57:36 crc kubenswrapper[4767]: I1124 21:57:36.952294 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-69dfn"
Nov 24 21:57:37 crc kubenswrapper[4767]: I1124 21:57:37.011151 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50135ea4-cbb7-47f5-ad9d-6c039017bc47-operator-scripts\") pod \"50135ea4-cbb7-47f5-ad9d-6c039017bc47\" (UID: \"50135ea4-cbb7-47f5-ad9d-6c039017bc47\") "
Nov 24 21:57:37 crc kubenswrapper[4767]: I1124 21:57:37.011452 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b94rl\" (UniqueName: \"kubernetes.io/projected/50135ea4-cbb7-47f5-ad9d-6c039017bc47-kube-api-access-b94rl\") pod \"50135ea4-cbb7-47f5-ad9d-6c039017bc47\" (UID: \"50135ea4-cbb7-47f5-ad9d-6c039017bc47\") "
Nov 24 21:57:37 crc kubenswrapper[4767]: I1124 21:57:37.011916 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50135ea4-cbb7-47f5-ad9d-6c039017bc47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50135ea4-cbb7-47f5-ad9d-6c039017bc47" (UID: "50135ea4-cbb7-47f5-ad9d-6c039017bc47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:57:37 crc kubenswrapper[4767]: I1124 21:57:37.019147 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50135ea4-cbb7-47f5-ad9d-6c039017bc47-kube-api-access-b94rl" (OuterVolumeSpecName: "kube-api-access-b94rl") pod "50135ea4-cbb7-47f5-ad9d-6c039017bc47" (UID: "50135ea4-cbb7-47f5-ad9d-6c039017bc47"). InnerVolumeSpecName "kube-api-access-b94rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:57:37 crc kubenswrapper[4767]: I1124 21:57:37.113973 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b94rl\" (UniqueName: \"kubernetes.io/projected/50135ea4-cbb7-47f5-ad9d-6c039017bc47-kube-api-access-b94rl\") on node \"crc\" DevicePath \"\""
Nov 24 21:57:37 crc kubenswrapper[4767]: I1124 21:57:37.114033 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50135ea4-cbb7-47f5-ad9d-6c039017bc47-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:57:37 crc kubenswrapper[4767]: I1124 21:57:37.605324 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-69dfn" event={"ID":"50135ea4-cbb7-47f5-ad9d-6c039017bc47","Type":"ContainerDied","Data":"6462961e412bfcde9d48cc3280b69759936035eb3cc01258c32c0c394d41bd6c"}
Nov 24 21:57:37 crc kubenswrapper[4767]: I1124 21:57:37.605397 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6462961e412bfcde9d48cc3280b69759936035eb3cc01258c32c0c394d41bd6c"
Nov 24 21:57:37 crc kubenswrapper[4767]: I1124 21:57:37.605334 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-69dfn"
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.127991 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-edf6-account-create-9hkk9"
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.234766 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-operator-scripts\") pod \"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f\" (UID: \"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f\") "
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.234987 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tv6v\" (UniqueName: \"kubernetes.io/projected/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-kube-api-access-7tv6v\") pod \"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f\" (UID: \"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f\") "
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.235679 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f" (UID: "f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.247339 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-kube-api-access-7tv6v" (OuterVolumeSpecName: "kube-api-access-7tv6v") pod "f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f" (UID: "f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f"). InnerVolumeSpecName "kube-api-access-7tv6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.330990 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9c6sk"
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.337748 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.337778 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tv6v\" (UniqueName: \"kubernetes.io/projected/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f-kube-api-access-7tv6v\") on node \"crc\" DevicePath \"\""
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.353655 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-978e-account-create-sfbws"
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.356368 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h6qt4"
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.370664 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-586a-account-create-kr7s2"
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.438685 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stb7l\" (UniqueName: \"kubernetes.io/projected/6dedacc6-c898-4425-908b-6e94ae7bdc7f-kube-api-access-stb7l\") pod \"6dedacc6-c898-4425-908b-6e94ae7bdc7f\" (UID: \"6dedacc6-c898-4425-908b-6e94ae7bdc7f\") "
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.438811 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dedacc6-c898-4425-908b-6e94ae7bdc7f-operator-scripts\") pod \"6dedacc6-c898-4425-908b-6e94ae7bdc7f\" (UID: \"6dedacc6-c898-4425-908b-6e94ae7bdc7f\") "
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.438841 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-operator-scripts\") pod \"23042e8e-dbe2-4fa2-adda-ebd1b50512ec\" (UID: \"23042e8e-dbe2-4fa2-adda-ebd1b50512ec\") "
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.438902 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw5d6\" (UniqueName: \"kubernetes.io/projected/b153a138-74ce-4646-ae74-aba4aaa74152-kube-api-access-kw5d6\") pod \"b153a138-74ce-4646-ae74-aba4aaa74152\" (UID: \"b153a138-74ce-4646-ae74-aba4aaa74152\") "
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.438926 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b153a138-74ce-4646-ae74-aba4aaa74152-operator-scripts\") pod \"b153a138-74ce-4646-ae74-aba4aaa74152\" (UID: \"b153a138-74ce-4646-ae74-aba4aaa74152\") "
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.438987 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5swd\" (UniqueName: \"kubernetes.io/projected/5aed67a4-e908-4066-b288-5f37c332a247-kube-api-access-l5swd\") pod \"5aed67a4-e908-4066-b288-5f37c332a247\" (UID: \"5aed67a4-e908-4066-b288-5f37c332a247\") "
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.439016 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aed67a4-e908-4066-b288-5f37c332a247-operator-scripts\") pod \"5aed67a4-e908-4066-b288-5f37c332a247\" (UID: \"5aed67a4-e908-4066-b288-5f37c332a247\") "
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.439036 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rktl8\" (UniqueName: \"kubernetes.io/projected/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-kube-api-access-rktl8\") pod \"23042e8e-dbe2-4fa2-adda-ebd1b50512ec\" (UID: \"23042e8e-dbe2-4fa2-adda-ebd1b50512ec\") "
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.439727 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23042e8e-dbe2-4fa2-adda-ebd1b50512ec" (UID: "23042e8e-dbe2-4fa2-adda-ebd1b50512ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.439768 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b153a138-74ce-4646-ae74-aba4aaa74152-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b153a138-74ce-4646-ae74-aba4aaa74152" (UID: "b153a138-74ce-4646-ae74-aba4aaa74152"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.440493 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aed67a4-e908-4066-b288-5f37c332a247-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5aed67a4-e908-4066-b288-5f37c332a247" (UID: "5aed67a4-e908-4066-b288-5f37c332a247"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.440530 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dedacc6-c898-4425-908b-6e94ae7bdc7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6dedacc6-c898-4425-908b-6e94ae7bdc7f" (UID: "6dedacc6-c898-4425-908b-6e94ae7bdc7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.442965 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-kube-api-access-rktl8" (OuterVolumeSpecName: "kube-api-access-rktl8") pod "23042e8e-dbe2-4fa2-adda-ebd1b50512ec" (UID: "23042e8e-dbe2-4fa2-adda-ebd1b50512ec"). InnerVolumeSpecName "kube-api-access-rktl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.443822 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dedacc6-c898-4425-908b-6e94ae7bdc7f-kube-api-access-stb7l" (OuterVolumeSpecName: "kube-api-access-stb7l") pod "6dedacc6-c898-4425-908b-6e94ae7bdc7f" (UID: "6dedacc6-c898-4425-908b-6e94ae7bdc7f"). InnerVolumeSpecName "kube-api-access-stb7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.444208 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aed67a4-e908-4066-b288-5f37c332a247-kube-api-access-l5swd" (OuterVolumeSpecName: "kube-api-access-l5swd") pod "5aed67a4-e908-4066-b288-5f37c332a247" (UID: "5aed67a4-e908-4066-b288-5f37c332a247"). InnerVolumeSpecName "kube-api-access-l5swd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.444292 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b153a138-74ce-4646-ae74-aba4aaa74152-kube-api-access-kw5d6" (OuterVolumeSpecName: "kube-api-access-kw5d6") pod "b153a138-74ce-4646-ae74-aba4aaa74152" (UID: "b153a138-74ce-4646-ae74-aba4aaa74152"). InnerVolumeSpecName "kube-api-access-kw5d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.479360 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.540320 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-sg-core-conf-yaml\") pod \"49dd16d2-f832-438a-928e-37461b92d4c8\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.540369 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6wr2\" (UniqueName: \"kubernetes.io/projected/49dd16d2-f832-438a-928e-37461b92d4c8-kube-api-access-z6wr2\") pod \"49dd16d2-f832-438a-928e-37461b92d4c8\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.540424 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-scripts\") pod \"49dd16d2-f832-438a-928e-37461b92d4c8\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.540557 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-combined-ca-bundle\") pod \"49dd16d2-f832-438a-928e-37461b92d4c8\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.540629 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-run-httpd\") pod \"49dd16d2-f832-438a-928e-37461b92d4c8\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.540673 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-log-httpd\") pod \"49dd16d2-f832-438a-928e-37461b92d4c8\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.540715 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-config-data\") pod 
\"49dd16d2-f832-438a-928e-37461b92d4c8\" (UID: \"49dd16d2-f832-438a-928e-37461b92d4c8\") " Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.541004 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "49dd16d2-f832-438a-928e-37461b92d4c8" (UID: "49dd16d2-f832-438a-928e-37461b92d4c8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.541325 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "49dd16d2-f832-438a-928e-37461b92d4c8" (UID: "49dd16d2-f832-438a-928e-37461b92d4c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.541964 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dedacc6-c898-4425-908b-6e94ae7bdc7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.541986 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.541999 4767 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.542013 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw5d6\" (UniqueName: \"kubernetes.io/projected/b153a138-74ce-4646-ae74-aba4aaa74152-kube-api-access-kw5d6\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.542027 4767 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49dd16d2-f832-438a-928e-37461b92d4c8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.542040 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b153a138-74ce-4646-ae74-aba4aaa74152-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.542051 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5swd\" (UniqueName: \"kubernetes.io/projected/5aed67a4-e908-4066-b288-5f37c332a247-kube-api-access-l5swd\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.542062 4767 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aed67a4-e908-4066-b288-5f37c332a247-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.542074 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rktl8\" (UniqueName: \"kubernetes.io/projected/23042e8e-dbe2-4fa2-adda-ebd1b50512ec-kube-api-access-rktl8\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.542087 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stb7l\" 
(UniqueName: \"kubernetes.io/projected/6dedacc6-c898-4425-908b-6e94ae7bdc7f-kube-api-access-stb7l\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.543684 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49dd16d2-f832-438a-928e-37461b92d4c8-kube-api-access-z6wr2" (OuterVolumeSpecName: "kube-api-access-z6wr2") pod "49dd16d2-f832-438a-928e-37461b92d4c8" (UID: "49dd16d2-f832-438a-928e-37461b92d4c8"). InnerVolumeSpecName "kube-api-access-z6wr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.544164 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-scripts" (OuterVolumeSpecName: "scripts") pod "49dd16d2-f832-438a-928e-37461b92d4c8" (UID: "49dd16d2-f832-438a-928e-37461b92d4c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.568802 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "49dd16d2-f832-438a-928e-37461b92d4c8" (UID: "49dd16d2-f832-438a-928e-37461b92d4c8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.624878 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-586a-account-create-kr7s2" event={"ID":"6dedacc6-c898-4425-908b-6e94ae7bdc7f","Type":"ContainerDied","Data":"be8bb517bf3c8ac19fc3e4b98afb3ec5fb010ac11a8e3cd8e03d784445b71d53"} Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.624916 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be8bb517bf3c8ac19fc3e4b98afb3ec5fb010ac11a8e3cd8e03d784445b71d53" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.624961 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-586a-account-create-kr7s2" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.633391 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h6qt4" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.633492 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h6qt4" event={"ID":"5aed67a4-e908-4066-b288-5f37c332a247","Type":"ContainerDied","Data":"0f100556313ccd2bff19ccb3d84c6aebc0f6383073502e40ce5c8cc4f3a1eb5e"} Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.633548 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f100556313ccd2bff19ccb3d84c6aebc0f6383073502e40ce5c8cc4f3a1eb5e" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.635705 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9c6sk" event={"ID":"b153a138-74ce-4646-ae74-aba4aaa74152","Type":"ContainerDied","Data":"f043cc66b72dfbd9dff912f950713bf507c9d8c3666af9108bfe411ba2068ef7"} Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.635736 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9c6sk" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.635750 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f043cc66b72dfbd9dff912f950713bf507c9d8c3666af9108bfe411ba2068ef7" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.638291 4767 generic.go:334] "Generic (PLEG): container finished" podID="49dd16d2-f832-438a-928e-37461b92d4c8" containerID="8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea" exitCode=0 Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.638349 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49dd16d2-f832-438a-928e-37461b92d4c8","Type":"ContainerDied","Data":"8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea"} Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.638373 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49dd16d2-f832-438a-928e-37461b92d4c8","Type":"ContainerDied","Data":"cd26b89da0232d0ecf3e183e02c7c04d5c623e84ae1e09082412707809c78c7a"} Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.638391 4767 scope.go:117] "RemoveContainer" containerID="5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.638517 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.639349 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49dd16d2-f832-438a-928e-37461b92d4c8" (UID: "49dd16d2-f832-438a-928e-37461b92d4c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.641085 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-978e-account-create-sfbws" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.641133 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-978e-account-create-sfbws" event={"ID":"23042e8e-dbe2-4fa2-adda-ebd1b50512ec","Type":"ContainerDied","Data":"2f2cf56e29a57b51131940e132bba66ee3a4f7fc1826af01980cfcec97bb6476"} Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.641159 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2cf56e29a57b51131940e132bba66ee3a4f7fc1826af01980cfcec97bb6476" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.643623 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-edf6-account-create-9hkk9" event={"ID":"f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f","Type":"ContainerDied","Data":"475a364bdbb9752ab3451b77c274fe9581aade4c6d521f6055d3599bd6d66888"} Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.643647 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="475a364bdbb9752ab3451b77c274fe9581aade4c6d521f6055d3599bd6d66888" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.643682 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-edf6-account-create-9hkk9" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.645099 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.645118 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.645126 4767 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.645135 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6wr2\" (UniqueName: \"kubernetes.io/projected/49dd16d2-f832-438a-928e-37461b92d4c8-kube-api-access-z6wr2\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.666240 4767 scope.go:117] "RemoveContainer" containerID="877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.677766 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-config-data" (OuterVolumeSpecName: "config-data") pod "49dd16d2-f832-438a-928e-37461b92d4c8" (UID: "49dd16d2-f832-438a-928e-37461b92d4c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.687590 4767 scope.go:117] "RemoveContainer" containerID="fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.709592 4767 scope.go:117] "RemoveContainer" containerID="8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.727643 4767 scope.go:117] "RemoveContainer" containerID="5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe" Nov 24 21:57:38 crc kubenswrapper[4767]: E1124 21:57:38.728015 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe\": container with ID starting with 5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe not found: ID does not exist" containerID="5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.728061 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe"} err="failed to get container status \"5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe\": rpc error: code = NotFound desc = could not find container \"5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe\": container with ID starting with 5d8985cde1366a60d320767f80c1867d9b7972c5178473c31c4ae3a2597969fe not found: ID does not exist" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.728093 4767 scope.go:117] "RemoveContainer" containerID="877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4" Nov 24 21:57:38 crc 
kubenswrapper[4767]: E1124 21:57:38.728537 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4\": container with ID starting with 877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4 not found: ID does not exist" containerID="877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.728576 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4"} err="failed to get container status \"877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4\": rpc error: code = NotFound desc = could not find container \"877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4\": container with ID starting with 877482392e3aa999fe82ada1bf0f7a912aeab2a251b00458900e975102186ef4 not found: ID does not exist" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.728598 4767 scope.go:117] "RemoveContainer" containerID="fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff" Nov 24 21:57:38 crc kubenswrapper[4767]: E1124 21:57:38.729327 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff\": container with ID starting with fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff not found: ID does not exist" containerID="fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.729715 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff"} err="failed to get container status \"fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff\": rpc error: code = NotFound desc = could not find container \"fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff\": container with ID starting with fe2bcb32542720d6f70d6f4967fa7f2954c4e1716bb70960a530d71a69d617ff not found: ID does not exist" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.729753 4767 scope.go:117] "RemoveContainer" containerID="8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea" Nov 24 21:57:38 crc kubenswrapper[4767]: E1124 21:57:38.730080 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea\": container with ID starting with 8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea not found: ID does not exist" containerID="8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea" Nov 24 21:57:38 crc kubenswrapper[4767]: I1124 21:57:38.730130 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea"} err="failed to get container status \"8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea\": rpc error: code = NotFound desc = could not find container \"8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea\": container with ID starting with 8e40cb5a16a48fe76af09700330910c3ddd3d862dcb89b057812c33788f05cea not found: ID does not exist" Nov 24 21:57:38 crc kubenswrapper[4767]: 
I1124 21:57:38.746545 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dd16d2-f832-438a-928e-37461b92d4c8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.012763 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.032442 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.044900 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:39 crc kubenswrapper[4767]: E1124 21:57:39.045316 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23042e8e-dbe2-4fa2-adda-ebd1b50512ec" containerName="mariadb-account-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045334 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="23042e8e-dbe2-4fa2-adda-ebd1b50512ec" containerName="mariadb-account-create" Nov 24 21:57:39 crc kubenswrapper[4767]: E1124 21:57:39.045354 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="proxy-httpd" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045363 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="proxy-httpd" Nov 24 21:57:39 crc kubenswrapper[4767]: E1124 21:57:39.045380 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dedacc6-c898-4425-908b-6e94ae7bdc7f" containerName="mariadb-account-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045386 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dedacc6-c898-4425-908b-6e94ae7bdc7f" containerName="mariadb-account-create" Nov 24 21:57:39 crc kubenswrapper[4767]: E1124 21:57:39.045396 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="ceilometer-notification-agent" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045402 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="ceilometer-notification-agent" Nov 24 21:57:39 crc kubenswrapper[4767]: E1124 21:57:39.045414 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="sg-core" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045420 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="sg-core" Nov 24 21:57:39 crc kubenswrapper[4767]: E1124 21:57:39.045430 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50135ea4-cbb7-47f5-ad9d-6c039017bc47" containerName="mariadb-database-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045436 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="50135ea4-cbb7-47f5-ad9d-6c039017bc47" containerName="mariadb-database-create" Nov 24 21:57:39 crc kubenswrapper[4767]: E1124 21:57:39.045447 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="ceilometer-central-agent" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045453 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="ceilometer-central-agent" Nov 24 21:57:39 crc kubenswrapper[4767]: E1124 
21:57:39.045462 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f" containerName="mariadb-account-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045467 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f" containerName="mariadb-account-create" Nov 24 21:57:39 crc kubenswrapper[4767]: E1124 21:57:39.045477 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aed67a4-e908-4066-b288-5f37c332a247" containerName="mariadb-database-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045483 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aed67a4-e908-4066-b288-5f37c332a247" containerName="mariadb-database-create" Nov 24 21:57:39 crc kubenswrapper[4767]: E1124 21:57:39.045495 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b153a138-74ce-4646-ae74-aba4aaa74152" containerName="mariadb-database-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045501 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b153a138-74ce-4646-ae74-aba4aaa74152" containerName="mariadb-database-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045663 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f" containerName="mariadb-account-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045677 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aed67a4-e908-4066-b288-5f37c332a247" containerName="mariadb-database-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045692 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="23042e8e-dbe2-4fa2-adda-ebd1b50512ec" containerName="mariadb-account-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045702 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="sg-core" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045714 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="b153a138-74ce-4646-ae74-aba4aaa74152" containerName="mariadb-database-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045722 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="50135ea4-cbb7-47f5-ad9d-6c039017bc47" containerName="mariadb-database-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045732 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dedacc6-c898-4425-908b-6e94ae7bdc7f" containerName="mariadb-account-create" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045744 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="ceilometer-notification-agent" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045753 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="proxy-httpd" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.045760 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" containerName="ceilometer-central-agent" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.047968 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.050029 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.050283 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.055806 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.154440 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-run-httpd\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.154491 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58trz\" (UniqueName: \"kubernetes.io/projected/c82b4116-dd73-4647-980f-e388c7a60f59-kube-api-access-58trz\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.154531 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.154564 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-scripts\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.154611 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-config-data\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.154643 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.154704 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-log-httpd\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.256614 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-config-data\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.256681 
4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.256744 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-log-httpd\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.256798 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-run-httpd\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.256838 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58trz\" (UniqueName: \"kubernetes.io/projected/c82b4116-dd73-4647-980f-e388c7a60f59-kube-api-access-58trz\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.256865 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.256903 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-scripts\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.257442 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-run-httpd\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.257461 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-log-httpd\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.261365 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.261486 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-scripts\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.262215 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-config-data\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.263000 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.290120 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58trz\" (UniqueName: \"kubernetes.io/projected/c82b4116-dd73-4647-980f-e388c7a60f59-kube-api-access-58trz\") pod \"ceilometer-0\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.431511 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:57:39 crc kubenswrapper[4767]: I1124 21:57:39.909168 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:57:39 crc kubenswrapper[4767]: W1124 21:57:39.910505 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc82b4116_dd73_4647_980f_e388c7a60f59.slice/crio-630408c259401881ed7962d91cd4d7ee74eff2981e6d5ae32081897fb67d26ce WatchSource:0}: Error finding container 630408c259401881ed7962d91cd4d7ee74eff2981e6d5ae32081897fb67d26ce: Status 404 returned error can't find the container with id 630408c259401881ed7962d91cd4d7ee74eff2981e6d5ae32081897fb67d26ce Nov 24 21:57:40 crc kubenswrapper[4767]: I1124 21:57:40.332776 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49dd16d2-f832-438a-928e-37461b92d4c8" path="/var/lib/kubelet/pods/49dd16d2-f832-438a-928e-37461b92d4c8/volumes" Nov 24 21:57:40 crc kubenswrapper[4767]: I1124 21:57:40.668552 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c82b4116-dd73-4647-980f-e388c7a60f59","Type":"ContainerStarted","Data":"630408c259401881ed7962d91cd4d7ee74eff2981e6d5ae32081897fb67d26ce"} Nov 24 21:57:41 crc kubenswrapper[4767]: I1124 21:57:41.681364 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c82b4116-dd73-4647-980f-e388c7a60f59","Type":"ContainerStarted","Data":"f2321f8e86d65406769e2fffcd76004fd79954738abb97d013d5bb230471e8ca"} Nov 24 21:57:41 crc kubenswrapper[4767]: I1124 21:57:41.682222 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c82b4116-dd73-4647-980f-e388c7a60f59","Type":"ContainerStarted","Data":"25cbae73cb7419e2345f3343176400fe9aebf605477266ddf29feeedb189e627"} Nov 24 21:57:42 crc kubenswrapper[4767]: I1124 21:57:42.695329 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c82b4116-dd73-4647-980f-e388c7a60f59","Type":"ContainerStarted","Data":"d892439281477df6b9865e04f396e0c182f4585b97a95c6ecd7b91d6e2c059bc"} Nov 24 21:57:43 crc kubenswrapper[4767]: I1124 21:57:43.712165 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c82b4116-dd73-4647-980f-e388c7a60f59","Type":"ContainerStarted","Data":"b0a892ea33e6c3fcc743f10cb988ff1fb223eafedfea569778a179cf697c2086"} Nov 24 21:57:43 crc 
kubenswrapper[4767]: I1124 21:57:43.712946 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.703383 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.395014084 podStartE2EDuration="5.703357356s" podCreationTimestamp="2025-11-24 21:57:39 +0000 UTC" firstStartedPulling="2025-11-24 21:57:39.91602012 +0000 UTC m=+1142.833003492" lastFinishedPulling="2025-11-24 21:57:43.224363382 +0000 UTC m=+1146.141346764" observedRunningTime="2025-11-24 21:57:43.742791257 +0000 UTC m=+1146.659774639" watchObservedRunningTime="2025-11-24 21:57:44.703357356 +0000 UTC m=+1147.620340748" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.722029 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qb7jt"] Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.723598 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.731761 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zmdj6" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.731902 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.734603 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.741425 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qb7jt"] Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.762389 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj5vx\" (UniqueName: \"kubernetes.io/projected/bf29d2a6-46ff-45c9-8da3-12d043fd287d-kube-api-access-rj5vx\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.762476 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-scripts\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.762528 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-config-data\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.762781 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.864161 4767 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj5vx\" (UniqueName: \"kubernetes.io/projected/bf29d2a6-46ff-45c9-8da3-12d043fd287d-kube-api-access-rj5vx\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.864237 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-scripts\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.864314 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-config-data\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.864403 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.870587 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-config-data\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.873133 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.874683 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-scripts\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:44 crc kubenswrapper[4767]: I1124 21:57:44.883633 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj5vx\" (UniqueName: \"kubernetes.io/projected/bf29d2a6-46ff-45c9-8da3-12d043fd287d-kube-api-access-rj5vx\") pod \"nova-cell0-conductor-db-sync-qb7jt\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:45 crc kubenswrapper[4767]: I1124 21:57:45.052635 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:57:45 crc kubenswrapper[4767]: I1124 21:57:45.534909 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qb7jt"] Nov 24 21:57:45 crc kubenswrapper[4767]: W1124 21:57:45.535982 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf29d2a6_46ff_45c9_8da3_12d043fd287d.slice/crio-ea9df2305bfafddfcd020bed4e1ec19ecb40fac650447aaaaf2b891b085e65b4 WatchSource:0}: Error finding container ea9df2305bfafddfcd020bed4e1ec19ecb40fac650447aaaaf2b891b085e65b4: Status 404 returned error can't find the container with id ea9df2305bfafddfcd020bed4e1ec19ecb40fac650447aaaaf2b891b085e65b4 Nov 24 21:57:45 crc kubenswrapper[4767]: I1124 21:57:45.751503 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qb7jt" event={"ID":"bf29d2a6-46ff-45c9-8da3-12d043fd287d","Type":"ContainerStarted","Data":"ea9df2305bfafddfcd020bed4e1ec19ecb40fac650447aaaaf2b891b085e65b4"} Nov 24 21:57:47 crc kubenswrapper[4767]: E1124 21:57:47.097580 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7e2dc5b_82ce_4ce5_8fb5_b4e52232140f.slice\": RecentStats: unable to find data in memory cache]" Nov 24 21:57:52 crc kubenswrapper[4767]: I1124 21:57:52.830789 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qb7jt" event={"ID":"bf29d2a6-46ff-45c9-8da3-12d043fd287d","Type":"ContainerStarted","Data":"293c137cc66435b7cac810cc7a19f066e20365fd15842efb5c17f85ea0fdd8cd"} Nov 24 21:57:52 crc kubenswrapper[4767]: I1124 21:57:52.867907 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qb7jt" podStartSLOduration=2.63387273 podStartE2EDuration="8.867885155s" podCreationTimestamp="2025-11-24 21:57:44 +0000 UTC" firstStartedPulling="2025-11-24 21:57:45.537983428 +0000 UTC m=+1148.454966800" lastFinishedPulling="2025-11-24 21:57:51.771995843 +0000 UTC m=+1154.688979225" observedRunningTime="2025-11-24 21:57:52.855977048 +0000 UTC m=+1155.772960470" watchObservedRunningTime="2025-11-24 21:57:52.867885155 +0000 UTC m=+1155.784868537" Nov 24 21:57:57 crc kubenswrapper[4767]: E1124 21:57:57.479005 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7e2dc5b_82ce_4ce5_8fb5_b4e52232140f.slice\": RecentStats: unable to find data in memory cache]" Nov 24 21:58:00 crc kubenswrapper[4767]: I1124 21:58:00.932452 4767 generic.go:334] "Generic (PLEG): container finished" podID="bf29d2a6-46ff-45c9-8da3-12d043fd287d" containerID="293c137cc66435b7cac810cc7a19f066e20365fd15842efb5c17f85ea0fdd8cd" exitCode=0 Nov 24 21:58:00 crc kubenswrapper[4767]: I1124 21:58:00.932538 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qb7jt" event={"ID":"bf29d2a6-46ff-45c9-8da3-12d043fd287d","Type":"ContainerDied","Data":"293c137cc66435b7cac810cc7a19f066e20365fd15842efb5c17f85ea0fdd8cd"} Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.323461 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.449442 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-scripts\") pod \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.449538 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-combined-ca-bundle\") pod \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.449611 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-config-data\") pod \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.449673 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj5vx\" (UniqueName: \"kubernetes.io/projected/bf29d2a6-46ff-45c9-8da3-12d043fd287d-kube-api-access-rj5vx\") pod \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\" (UID: \"bf29d2a6-46ff-45c9-8da3-12d043fd287d\") " Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.456722 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf29d2a6-46ff-45c9-8da3-12d043fd287d-kube-api-access-rj5vx" (OuterVolumeSpecName: "kube-api-access-rj5vx") pod "bf29d2a6-46ff-45c9-8da3-12d043fd287d" (UID: "bf29d2a6-46ff-45c9-8da3-12d043fd287d"). InnerVolumeSpecName "kube-api-access-rj5vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.457034 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-scripts" (OuterVolumeSpecName: "scripts") pod "bf29d2a6-46ff-45c9-8da3-12d043fd287d" (UID: "bf29d2a6-46ff-45c9-8da3-12d043fd287d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.497724 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-config-data" (OuterVolumeSpecName: "config-data") pod "bf29d2a6-46ff-45c9-8da3-12d043fd287d" (UID: "bf29d2a6-46ff-45c9-8da3-12d043fd287d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.499556 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf29d2a6-46ff-45c9-8da3-12d043fd287d" (UID: "bf29d2a6-46ff-45c9-8da3-12d043fd287d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.552625 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj5vx\" (UniqueName: \"kubernetes.io/projected/bf29d2a6-46ff-45c9-8da3-12d043fd287d-kube-api-access-rj5vx\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.552669 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.552687 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.552702 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf29d2a6-46ff-45c9-8da3-12d043fd287d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.964108 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qb7jt" event={"ID":"bf29d2a6-46ff-45c9-8da3-12d043fd287d","Type":"ContainerDied","Data":"ea9df2305bfafddfcd020bed4e1ec19ecb40fac650447aaaaf2b891b085e65b4"} Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.964463 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea9df2305bfafddfcd020bed4e1ec19ecb40fac650447aaaaf2b891b085e65b4" Nov 24 21:58:02 crc kubenswrapper[4767]: I1124 21:58:02.964200 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qb7jt" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.074882 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:58:03 crc kubenswrapper[4767]: E1124 21:58:03.075298 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf29d2a6-46ff-45c9-8da3-12d043fd287d" containerName="nova-cell0-conductor-db-sync" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.075314 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf29d2a6-46ff-45c9-8da3-12d043fd287d" containerName="nova-cell0-conductor-db-sync" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.075503 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf29d2a6-46ff-45c9-8da3-12d043fd287d" containerName="nova-cell0-conductor-db-sync" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.076110 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.077870 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zmdj6" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.080893 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.091792 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.266211 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04008f61-32ce-4326-b12d-056878a5479f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"04008f61-32ce-4326-b12d-056878a5479f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.266306 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpn64\" (UniqueName: \"kubernetes.io/projected/04008f61-32ce-4326-b12d-056878a5479f-kube-api-access-xpn64\") pod \"nova-cell0-conductor-0\" (UID: \"04008f61-32ce-4326-b12d-056878a5479f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.266357 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04008f61-32ce-4326-b12d-056878a5479f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"04008f61-32ce-4326-b12d-056878a5479f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.368103 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpn64\" (UniqueName: \"kubernetes.io/projected/04008f61-32ce-4326-b12d-056878a5479f-kube-api-access-xpn64\") pod \"nova-cell0-conductor-0\" (UID: \"04008f61-32ce-4326-b12d-056878a5479f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.368178 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04008f61-32ce-4326-b12d-056878a5479f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"04008f61-32ce-4326-b12d-056878a5479f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.368339 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04008f61-32ce-4326-b12d-056878a5479f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"04008f61-32ce-4326-b12d-056878a5479f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.373892 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04008f61-32ce-4326-b12d-056878a5479f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"04008f61-32ce-4326-b12d-056878a5479f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.374229 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04008f61-32ce-4326-b12d-056878a5479f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"04008f61-32ce-4326-b12d-056878a5479f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.388391 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpn64\" (UniqueName: \"kubernetes.io/projected/04008f61-32ce-4326-b12d-056878a5479f-kube-api-access-xpn64\") pod \"nova-cell0-conductor-0\" (UID: \"04008f61-32ce-4326-b12d-056878a5479f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.395810 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.822457 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:58:03 crc kubenswrapper[4767]: I1124 21:58:03.976661 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"04008f61-32ce-4326-b12d-056878a5479f","Type":"ContainerStarted","Data":"413759ff5b9905b211b5e98113dc14f7268586d50b80ba1ff3e96857bfa2478d"} Nov 24 21:58:04 crc kubenswrapper[4767]: I1124 21:58:04.991489 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"04008f61-32ce-4326-b12d-056878a5479f","Type":"ContainerStarted","Data":"d150442bdd7a6d405e73f8d6b6890a6e52c6297dd084524889b67eb202c18d30"} Nov 24 21:58:04 crc kubenswrapper[4767]: I1124 21:58:04.991977 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:05 crc kubenswrapper[4767]: I1124 21:58:05.013259 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.013234046 podStartE2EDuration="2.013234046s" podCreationTimestamp="2025-11-24 21:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:58:05.009661255 +0000 UTC m=+1167.926644637" watchObservedRunningTime="2025-11-24 21:58:05.013234046 +0000 UTC m=+1167.930217428" Nov 24 21:58:07 crc kubenswrapper[4767]: E1124 21:58:07.778196 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7e2dc5b_82ce_4ce5_8fb5_b4e52232140f.slice\": RecentStats: unable to find data in memory cache]" Nov 24 21:58:09 crc kubenswrapper[4767]: I1124 21:58:09.445355 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 21:58:12 crc kubenswrapper[4767]: I1124 21:58:12.960117 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:58:12 crc kubenswrapper[4767]: I1124 21:58:12.960720 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="dc8b7b67-1318-4978-880f-125741025c39" containerName="kube-state-metrics" containerID="cri-o://f3a923c7df30694cc9f1da10c16f928e6ac1a2314ee06df0d1c664cbfe67b2d9" gracePeriod=30 Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.088457 4767 generic.go:334] "Generic (PLEG): container finished" podID="dc8b7b67-1318-4978-880f-125741025c39" containerID="f3a923c7df30694cc9f1da10c16f928e6ac1a2314ee06df0d1c664cbfe67b2d9" exitCode=2 Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.088487 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"dc8b7b67-1318-4978-880f-125741025c39","Type":"ContainerDied","Data":"f3a923c7df30694cc9f1da10c16f928e6ac1a2314ee06df0d1c664cbfe67b2d9"} Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.437310 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.449984 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.585830 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdk27\" (UniqueName: \"kubernetes.io/projected/dc8b7b67-1318-4978-880f-125741025c39-kube-api-access-gdk27\") pod \"dc8b7b67-1318-4978-880f-125741025c39\" (UID: \"dc8b7b67-1318-4978-880f-125741025c39\") " Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.592426 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8b7b67-1318-4978-880f-125741025c39-kube-api-access-gdk27" (OuterVolumeSpecName: "kube-api-access-gdk27") pod "dc8b7b67-1318-4978-880f-125741025c39" (UID: "dc8b7b67-1318-4978-880f-125741025c39"). InnerVolumeSpecName "kube-api-access-gdk27". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.689027 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdk27\" (UniqueName: \"kubernetes.io/projected/dc8b7b67-1318-4978-880f-125741025c39-kube-api-access-gdk27\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.937208 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7pqm2"] Nov 24 21:58:13 crc kubenswrapper[4767]: E1124 21:58:13.937663 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8b7b67-1318-4978-880f-125741025c39" containerName="kube-state-metrics" Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.937686 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8b7b67-1318-4978-880f-125741025c39" containerName="kube-state-metrics" Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.937919 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc8b7b67-1318-4978-880f-125741025c39" containerName="kube-state-metrics" Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.938693 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.941476 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.947846 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 21:58:13 crc kubenswrapper[4767]: I1124 21:58:13.950820 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7pqm2"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.097478 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-scripts\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.097552 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hknd\" (UniqueName: \"kubernetes.io/projected/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-kube-api-access-5hknd\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.097598 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-config-data\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.097613 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.102162 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc8b7b67-1318-4978-880f-125741025c39","Type":"ContainerDied","Data":"f8542530ff562b5bf38676154725639a47832fa1f5e859906f3a4883b3066895"} Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.102207 4767 scope.go:117] "RemoveContainer" containerID="f3a923c7df30694cc9f1da10c16f928e6ac1a2314ee06df0d1c664cbfe67b2d9" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.102336 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.139986 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.141499 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.154647 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.214389 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-logs\") pod \"nova-metadata-0\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.214572 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.214658 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-scripts\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.214757 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrlg\" (UniqueName: \"kubernetes.io/projected/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-kube-api-access-dwrlg\") pod \"nova-metadata-0\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.214803 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hknd\" (UniqueName: \"kubernetes.io/projected/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-kube-api-access-5hknd\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.214871 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-config-data\") pod \"nova-metadata-0\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.214913 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-config-data\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.214943 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.229136 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.248930 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.249345 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-scripts\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.250347 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-config-data\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.252430 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.271193 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.305428 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hknd\" (UniqueName: \"kubernetes.io/projected/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-kube-api-access-5hknd\") pod \"nova-cell0-cell-mapping-7pqm2\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.319302 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.319462 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrlg\" (UniqueName: \"kubernetes.io/projected/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-kube-api-access-dwrlg\") pod \"nova-metadata-0\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.319520 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-config-data\") pod \"nova-metadata-0\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.319614 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-logs\") pod \"nova-metadata-0\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.343590 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-logs\") pod \"nova-metadata-0\" (UID: 
\"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.358688 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-config-data\") pod \"nova-metadata-0\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.370412 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.382421 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.382459 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.382476 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.390850 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.408999 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.410772 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.413756 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.413951 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.426977 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrlg\" (UniqueName: \"kubernetes.io/projected/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-kube-api-access-dwrlg\") pod \"nova-metadata-0\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.443350 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.444830 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.452331 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-logs\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.452431 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-config-data\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.452474 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6hxp\" (UniqueName: \"kubernetes.io/projected/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-kube-api-access-m6hxp\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.452545 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.453728 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.454260 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.460824 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.469250 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.470445 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.475197 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.475783 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.490694 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.494194 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dh6cv"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.496382 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.503654 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dh6cv"] Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.554789 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-config-data\") pod \"nova-scheduler-0\" (UID: \"598505e6-8585-4537-b00e-416bd717d2ce\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.554849 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/23380850-3126-4e93-b869-0da00c51d57c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.554880 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-config-data\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.554915 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6hxp\" (UniqueName: \"kubernetes.io/projected/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-kube-api-access-m6hxp\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.554938 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23380850-3126-4e93-b869-0da00c51d57c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.554984 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"598505e6-8585-4537-b00e-416bd717d2ce\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.555009 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.555043 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/23380850-3126-4e93-b869-0da00c51d57c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.555058 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xzbj\" (UniqueName: 
\"kubernetes.io/projected/598505e6-8585-4537-b00e-416bd717d2ce-kube-api-access-5xzbj\") pod \"nova-scheduler-0\" (UID: \"598505e6-8585-4537-b00e-416bd717d2ce\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.555100 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h5kv\" (UniqueName: \"kubernetes.io/projected/23380850-3126-4e93-b869-0da00c51d57c-kube-api-access-2h5kv\") pod \"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.555126 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-logs\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.555845 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-logs\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.558487 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.561306 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.561642 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-config-data\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.573509 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6hxp\" (UniqueName: \"kubernetes.io/projected/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-kube-api-access-m6hxp\") pod \"nova-api-0\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.657332 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/23380850-3126-4e93-b869-0da00c51d57c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.657675 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xzbj\" (UniqueName: \"kubernetes.io/projected/598505e6-8585-4537-b00e-416bd717d2ce-kube-api-access-5xzbj\") pod \"nova-scheduler-0\" (UID: \"598505e6-8585-4537-b00e-416bd717d2ce\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.657745 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5kv\" (UniqueName: \"kubernetes.io/projected/23380850-3126-4e93-b869-0da00c51d57c-kube-api-access-2h5kv\") pod 
\"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.657804 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.657838 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-svc\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.657884 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.657907 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-config\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.657931 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.657963 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.657994 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfghz\" (UniqueName: \"kubernetes.io/projected/644effd9-c94f-46e2-8b1b-5077f66d023e-kube-api-access-hfghz\") pod \"nova-cell1-novncproxy-0\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.658045 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-config-data\") pod \"nova-scheduler-0\" (UID: \"598505e6-8585-4537-b00e-416bd717d2ce\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.658070 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/23380850-3126-4e93-b869-0da00c51d57c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.658123 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23380850-3126-4e93-b869-0da00c51d57c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.658156 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww2wl\" (UniqueName: \"kubernetes.io/projected/93f39202-b69a-4038-b366-58612af46372-kube-api-access-ww2wl\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.658182 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.658243 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"598505e6-8585-4537-b00e-416bd717d2ce\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.663990 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/23380850-3126-4e93-b869-0da00c51d57c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.668800 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/23380850-3126-4e93-b869-0da00c51d57c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.668995 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-config-data\") pod \"nova-scheduler-0\" (UID: \"598505e6-8585-4537-b00e-416bd717d2ce\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.669083 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23380850-3126-4e93-b869-0da00c51d57c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.670141 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"598505e6-8585-4537-b00e-416bd717d2ce\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.675887 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xzbj\" (UniqueName: \"kubernetes.io/projected/598505e6-8585-4537-b00e-416bd717d2ce-kube-api-access-5xzbj\") pod \"nova-scheduler-0\" (UID: \"598505e6-8585-4537-b00e-416bd717d2ce\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.677810 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5kv\" (UniqueName: \"kubernetes.io/projected/23380850-3126-4e93-b869-0da00c51d57c-kube-api-access-2h5kv\") pod \"kube-state-metrics-0\" (UID: \"23380850-3126-4e93-b869-0da00c51d57c\") " pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.763512 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.763558 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-svc\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.763590 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.763610 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-config\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.763630 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.763651 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.763669 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfghz\" (UniqueName: \"kubernetes.io/projected/644effd9-c94f-46e2-8b1b-5077f66d023e-kube-api-access-hfghz\") pod \"nova-cell1-novncproxy-0\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:14 crc 
kubenswrapper[4767]: I1124 21:58:14.763713 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww2wl\" (UniqueName: \"kubernetes.io/projected/93f39202-b69a-4038-b366-58612af46372-kube-api-access-ww2wl\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.763731 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.764690 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-svc\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.765486 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.765530 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.765941 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.766237 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.766928 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.767340 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-config\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.768027 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 
21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.782471 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.784516 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww2wl\" (UniqueName: \"kubernetes.io/projected/93f39202-b69a-4038-b366-58612af46372-kube-api-access-ww2wl\") pod \"dnsmasq-dns-757b4f8459-dh6cv\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.784991 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfghz\" (UniqueName: \"kubernetes.io/projected/644effd9-c94f-46e2-8b1b-5077f66d023e-kube-api-access-hfghz\") pod \"nova-cell1-novncproxy-0\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.798598 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.811563 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:14 crc kubenswrapper[4767]: I1124 21:58:14.827254 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.112761 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.195436 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7pqm2"] Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.336476 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5m85d"] Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.340741 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.341323 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5m85d"] Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.343668 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.343842 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.462016 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.484531 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.484828 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9f8\" (UniqueName: \"kubernetes.io/projected/04caedcb-53f5-42d5-9161-850f38541c06-kube-api-access-pk9f8\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.508376 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-scripts\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.508639 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-config-data\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.615593 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-config-data\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.615924 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.615968 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk9f8\" (UniqueName: \"kubernetes.io/projected/04caedcb-53f5-42d5-9161-850f38541c06-kube-api-access-pk9f8\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: 
\"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.616025 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-scripts\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.624708 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.634140 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-scripts\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.635053 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-config-data\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.635238 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.637065 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk9f8\" (UniqueName: \"kubernetes.io/projected/04caedcb-53f5-42d5-9161-850f38541c06-kube-api-access-pk9f8\") pod \"nova-cell1-conductor-db-sync-5m85d\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.683986 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.720529 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.809217 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:58:15 crc kubenswrapper[4767]: W1124 21:58:15.828648 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod644effd9_c94f_46e2_8b1b_5077f66d023e.slice/crio-fa26ef0cef1a49989379009b9a7f6e70937ade46b6bb511adb0741d4770924fb WatchSource:0}: Error finding container fa26ef0cef1a49989379009b9a7f6e70937ade46b6bb511adb0741d4770924fb: Status 404 returned error can't find the container with id fa26ef0cef1a49989379009b9a7f6e70937ade46b6bb511adb0741d4770924fb Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.878017 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dh6cv"] Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.970320 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.970621 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="ceilometer-central-agent" containerID="cri-o://25cbae73cb7419e2345f3343176400fe9aebf605477266ddf29feeedb189e627" gracePeriod=30 Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.970690 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="proxy-httpd" containerID="cri-o://b0a892ea33e6c3fcc743f10cb988ff1fb223eafedfea569778a179cf697c2086" gracePeriod=30 Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.970716 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="sg-core" containerID="cri-o://d892439281477df6b9865e04f396e0c182f4585b97a95c6ecd7b91d6e2c059bc" gracePeriod=30 Nov 24 21:58:15 crc kubenswrapper[4767]: I1124 21:58:15.970915 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="ceilometer-notification-agent" containerID="cri-o://f2321f8e86d65406769e2fffcd76004fd79954738abb97d013d5bb230471e8ca" gracePeriod=30 Nov 24 21:58:16 crc kubenswrapper[4767]: I1124 21:58:16.184863 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5m85d"] Nov 24 21:58:16 crc kubenswrapper[4767]: I1124 21:58:16.201261 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f","Type":"ContainerStarted","Data":"e289ec9a75e4f034e1148c3388ce824c4e67576c61799a50252f1f9a26768c20"} Nov 24 21:58:16 crc kubenswrapper[4767]: I1124 21:58:16.208242 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74","Type":"ContainerStarted","Data":"282923dc4d05c3702df71dcfd75385bbac49f63487b27bbfc99553e490c6e418"} Nov 24 21:58:16 crc kubenswrapper[4767]: I1124 21:58:16.212399 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"644effd9-c94f-46e2-8b1b-5077f66d023e","Type":"ContainerStarted","Data":"fa26ef0cef1a49989379009b9a7f6e70937ade46b6bb511adb0741d4770924fb"} Nov 24 21:58:16 crc kubenswrapper[4767]: I1124 21:58:16.225667 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"23380850-3126-4e93-b869-0da00c51d57c","Type":"ContainerStarted","Data":"7f1dad53779d3133cd9c5cb962a9a53a6dd585f9ddfc5c55a01879913ac30b47"} Nov 24 21:58:16 crc kubenswrapper[4767]: I1124 21:58:16.237561 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" event={"ID":"93f39202-b69a-4038-b366-58612af46372","Type":"ContainerStarted","Data":"da7627da2c7d7efedb593b34cef9ef396347663f50c0b270e5eb7a2c70bbcd72"} Nov 24 21:58:16 crc kubenswrapper[4767]: I1124 21:58:16.252046 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"598505e6-8585-4537-b00e-416bd717d2ce","Type":"ContainerStarted","Data":"11222e29b49252ff99d0635877e7c1e4c87f2d266f8e53d2345d8ba6eb75f465"} Nov 24 21:58:16 crc kubenswrapper[4767]: I1124 21:58:16.272625 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7pqm2" event={"ID":"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0","Type":"ContainerStarted","Data":"e01c9f961ec22e08c4c0d7fbc846695049ed620091c6fd003e6faca82305f6fe"} Nov 24 21:58:16 crc kubenswrapper[4767]: I1124 21:58:16.272686 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7pqm2" event={"ID":"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0","Type":"ContainerStarted","Data":"9fd8df6de9e06c607f62d832c664ea1b23b81430f30651731314ae9ebabe0b60"} Nov 24 21:58:16 crc kubenswrapper[4767]: I1124 21:58:16.309777 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7pqm2" podStartSLOduration=3.309746373 podStartE2EDuration="3.309746373s" podCreationTimestamp="2025-11-24 21:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:58:16.307139329 +0000 UTC m=+1179.224122701" watchObservedRunningTime="2025-11-24 21:58:16.309746373 +0000 UTC m=+1179.226729745" Nov 24 21:58:16 crc kubenswrapper[4767]: I1124 21:58:16.335608 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc8b7b67-1318-4978-880f-125741025c39" path="/var/lib/kubelet/pods/dc8b7b67-1318-4978-880f-125741025c39/volumes" Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.293239 4767 generic.go:334] "Generic (PLEG): container finished" podID="c82b4116-dd73-4647-980f-e388c7a60f59" containerID="b0a892ea33e6c3fcc743f10cb988ff1fb223eafedfea569778a179cf697c2086" exitCode=0 Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.294339 4767 generic.go:334] "Generic (PLEG): container finished" podID="c82b4116-dd73-4647-980f-e388c7a60f59" containerID="d892439281477df6b9865e04f396e0c182f4585b97a95c6ecd7b91d6e2c059bc" exitCode=2 Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.294354 4767 generic.go:334] "Generic (PLEG): container finished" podID="c82b4116-dd73-4647-980f-e388c7a60f59" containerID="f2321f8e86d65406769e2fffcd76004fd79954738abb97d013d5bb230471e8ca" exitCode=0 Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.294363 4767 generic.go:334] "Generic (PLEG): container finished" podID="c82b4116-dd73-4647-980f-e388c7a60f59" containerID="25cbae73cb7419e2345f3343176400fe9aebf605477266ddf29feeedb189e627" exitCode=0 
Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.294215 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c82b4116-dd73-4647-980f-e388c7a60f59","Type":"ContainerDied","Data":"b0a892ea33e6c3fcc743f10cb988ff1fb223eafedfea569778a179cf697c2086"} Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.294432 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c82b4116-dd73-4647-980f-e388c7a60f59","Type":"ContainerDied","Data":"d892439281477df6b9865e04f396e0c182f4585b97a95c6ecd7b91d6e2c059bc"} Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.294446 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c82b4116-dd73-4647-980f-e388c7a60f59","Type":"ContainerDied","Data":"f2321f8e86d65406769e2fffcd76004fd79954738abb97d013d5bb230471e8ca"} Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.294459 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c82b4116-dd73-4647-980f-e388c7a60f59","Type":"ContainerDied","Data":"25cbae73cb7419e2345f3343176400fe9aebf605477266ddf29feeedb189e627"} Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.320602 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5m85d" event={"ID":"04caedcb-53f5-42d5-9161-850f38541c06","Type":"ContainerStarted","Data":"4d681b6ee97b4e2ec2c7c2a6f9c1d4f4b136be0ada0a441a05165ba674b226c5"} Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.320676 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5m85d" event={"ID":"04caedcb-53f5-42d5-9161-850f38541c06","Type":"ContainerStarted","Data":"36a985d96387cd11353314cae256926a6ebcca7fc6cf9ccf28fd9871699aed21"} Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.324883 4767 generic.go:334] "Generic (PLEG): container finished" podID="93f39202-b69a-4038-b366-58612af46372" containerID="e6064d94acbea7e3df4283686287827ab1e8e4d13841c42270e1af0ddc23c8f6" exitCode=0 Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.324950 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" event={"ID":"93f39202-b69a-4038-b366-58612af46372","Type":"ContainerDied","Data":"e6064d94acbea7e3df4283686287827ab1e8e4d13841c42270e1af0ddc23c8f6"} Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.330536 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"23380850-3126-4e93-b869-0da00c51d57c","Type":"ContainerStarted","Data":"c8c8c16b909edec0ed7a951afbc4d81a90a0405d3a8c60d2951dbfe23d41192b"} Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.330570 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.341149 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5m85d" podStartSLOduration=2.341127027 podStartE2EDuration="2.341127027s" podCreationTimestamp="2025-11-24 21:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:58:17.333574623 +0000 UTC m=+1180.250557995" watchObservedRunningTime="2025-11-24 21:58:17.341127027 +0000 UTC m=+1180.258110399" Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.356334 4767 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.764275277 podStartE2EDuration="3.356318927s" podCreationTimestamp="2025-11-24 21:58:14 +0000 UTC" firstStartedPulling="2025-11-24 21:58:15.654631186 +0000 UTC m=+1178.571614558" lastFinishedPulling="2025-11-24 21:58:16.246674836 +0000 UTC m=+1179.163658208" observedRunningTime="2025-11-24 21:58:17.354976479 +0000 UTC m=+1180.271959871" watchObservedRunningTime="2025-11-24 21:58:17.356318927 +0000 UTC m=+1180.273302289" Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.583664 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:17 crc kubenswrapper[4767]: I1124 21:58:17.607668 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:58:18 crc kubenswrapper[4767]: E1124 21:58:18.079152 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7e2dc5b_82ce_4ce5_8fb5_b4e52232140f.slice\": RecentStats: unable to find data in memory cache]" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.192913 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.326976 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-scripts\") pod \"c82b4116-dd73-4647-980f-e388c7a60f59\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.327068 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-run-httpd\") pod \"c82b4116-dd73-4647-980f-e388c7a60f59\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.327139 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-config-data\") pod \"c82b4116-dd73-4647-980f-e388c7a60f59\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.327196 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-combined-ca-bundle\") pod \"c82b4116-dd73-4647-980f-e388c7a60f59\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.327239 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58trz\" (UniqueName: \"kubernetes.io/projected/c82b4116-dd73-4647-980f-e388c7a60f59-kube-api-access-58trz\") pod \"c82b4116-dd73-4647-980f-e388c7a60f59\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.327327 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-sg-core-conf-yaml\") pod \"c82b4116-dd73-4647-980f-e388c7a60f59\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.327353 4767 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-log-httpd\") pod \"c82b4116-dd73-4647-980f-e388c7a60f59\" (UID: \"c82b4116-dd73-4647-980f-e388c7a60f59\") " Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.329336 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c82b4116-dd73-4647-980f-e388c7a60f59" (UID: "c82b4116-dd73-4647-980f-e388c7a60f59"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.330837 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c82b4116-dd73-4647-980f-e388c7a60f59" (UID: "c82b4116-dd73-4647-980f-e388c7a60f59"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.335008 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-scripts" (OuterVolumeSpecName: "scripts") pod "c82b4116-dd73-4647-980f-e388c7a60f59" (UID: "c82b4116-dd73-4647-980f-e388c7a60f59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.359420 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82b4116-dd73-4647-980f-e388c7a60f59-kube-api-access-58trz" (OuterVolumeSpecName: "kube-api-access-58trz") pod "c82b4116-dd73-4647-980f-e388c7a60f59" (UID: "c82b4116-dd73-4647-980f-e388c7a60f59"). InnerVolumeSpecName "kube-api-access-58trz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.409432 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.429349 4767 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.429374 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.429384 4767 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c82b4116-dd73-4647-980f-e388c7a60f59-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.429393 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58trz\" (UniqueName: \"kubernetes.io/projected/c82b4116-dd73-4647-980f-e388c7a60f59-kube-api-access-58trz\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.505100 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c82b4116-dd73-4647-980f-e388c7a60f59" (UID: "c82b4116-dd73-4647-980f-e388c7a60f59"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.533192 4767 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.611189 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-config-data" (OuterVolumeSpecName: "config-data") pod "c82b4116-dd73-4647-980f-e388c7a60f59" (UID: "c82b4116-dd73-4647-980f-e388c7a60f59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.619534 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c82b4116-dd73-4647-980f-e388c7a60f59","Type":"ContainerDied","Data":"630408c259401881ed7962d91cd4d7ee74eff2981e6d5ae32081897fb67d26ce"} Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.619597 4767 scope.go:117] "RemoveContainer" containerID="b0a892ea33e6c3fcc743f10cb988ff1fb223eafedfea569778a179cf697c2086" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.630831 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c82b4116-dd73-4647-980f-e388c7a60f59" (UID: "c82b4116-dd73-4647-980f-e388c7a60f59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.637494 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.637523 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82b4116-dd73-4647-980f-e388c7a60f59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.752576 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.766767 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.780659 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:58:18 crc kubenswrapper[4767]: E1124 21:58:18.781084 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="proxy-httpd" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.781100 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="proxy-httpd" Nov 24 21:58:18 crc kubenswrapper[4767]: E1124 21:58:18.781138 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="ceilometer-central-agent" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.781145 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="ceilometer-central-agent" Nov 24 21:58:18 crc kubenswrapper[4767]: E1124 21:58:18.781159 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="sg-core" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.781165 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="sg-core" Nov 24 21:58:18 crc kubenswrapper[4767]: E1124 21:58:18.781179 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="ceilometer-notification-agent" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.781186 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="ceilometer-notification-agent" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.781404 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="proxy-httpd" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.781421 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="ceilometer-notification-agent" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.781440 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="sg-core" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.781455 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" containerName="ceilometer-central-agent" Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.783358 4767 util.go:30] "No sandbox for 
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.783358 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.786151 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.786187 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.786253 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.791619 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.941723 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-log-httpd\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0"
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.941756 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-config-data\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0"
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.941861 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0"
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.941881 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0"
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.941903 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-run-httpd\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0"
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.941928 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0"
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.941948 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-scripts\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0"
Nov 24 21:58:18 crc kubenswrapper[4767]: I1124 21:58:18.941963 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wjnj\" (UniqueName:
\"kubernetes.io/projected/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-kube-api-access-4wjnj\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.044088 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.044130 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.044151 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-run-httpd\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.044182 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.044216 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-scripts\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.044242 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wjnj\" (UniqueName: \"kubernetes.io/projected/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-kube-api-access-4wjnj\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.044378 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-log-httpd\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.044408 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-config-data\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.044984 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-run-httpd\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.045357 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-log-httpd\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.048856 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-scripts\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.049221 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.049263 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-config-data\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.050506 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.054692 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.065325 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wjnj\" (UniqueName: \"kubernetes.io/projected/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-kube-api-access-4wjnj\") pod \"ceilometer-0\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.103294 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.358333 4767 scope.go:117] "RemoveContainer" containerID="d892439281477df6b9865e04f396e0c182f4585b97a95c6ecd7b91d6e2c059bc" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.424685 4767 scope.go:117] "RemoveContainer" containerID="f2321f8e86d65406769e2fffcd76004fd79954738abb97d013d5bb230471e8ca" Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.632178 4767 scope.go:117] "RemoveContainer" containerID="25cbae73cb7419e2345f3343176400fe9aebf605477266ddf29feeedb189e627" Nov 24 21:58:19 crc kubenswrapper[4767]: W1124 21:58:19.924179 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda679f0c_cfa0_49a0_b8ca_bfe7ee7e0f2a.slice/crio-880e0c95cf33c5bfb4ee0230b1f20ca220f5fd3eb62649ce9caffafd881277b8 WatchSource:0}: Error finding container 880e0c95cf33c5bfb4ee0230b1f20ca220f5fd3eb62649ce9caffafd881277b8: Status 404 returned error can't find the container with id 880e0c95cf33c5bfb4ee0230b1f20ca220f5fd3eb62649ce9caffafd881277b8 Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.924959 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:58:19 crc kubenswrapper[4767]: I1124 21:58:19.927478 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.324941 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82b4116-dd73-4647-980f-e388c7a60f59" path="/var/lib/kubelet/pods/c82b4116-dd73-4647-980f-e388c7a60f59/volumes" Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.443047 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f","Type":"ContainerStarted","Data":"633ef015a6a75b60caa91dde44609ed958b624d31c001a4965f1df1fc435e86c"} Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.443102 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f","Type":"ContainerStarted","Data":"e376c9d935086674c61de6631e0a46a078f89c0c5db12143e76b4a78ae5e986f"} Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.445912 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74","Type":"ContainerStarted","Data":"d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040"} Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.445963 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74","Type":"ContainerStarted","Data":"1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68"} Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.446063 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" containerName="nova-metadata-log" containerID="cri-o://1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68" gracePeriod=30 Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.446157 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" containerName="nova-metadata-metadata" 
containerID="cri-o://d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040" gracePeriod=30 Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.449854 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"644effd9-c94f-46e2-8b1b-5077f66d023e","Type":"ContainerStarted","Data":"2c024080de5204ac3bc7215f0d73eb073f34e6b703d85c6d7c8497d7235077bc"} Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.450011 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="644effd9-c94f-46e2-8b1b-5077f66d023e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2c024080de5204ac3bc7215f0d73eb073f34e6b703d85c6d7c8497d7235077bc" gracePeriod=30 Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.458102 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" event={"ID":"93f39202-b69a-4038-b366-58612af46372","Type":"ContainerStarted","Data":"726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689"} Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.459102 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.465177 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a","Type":"ContainerStarted","Data":"880e0c95cf33c5bfb4ee0230b1f20ca220f5fd3eb62649ce9caffafd881277b8"} Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.467564 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"598505e6-8585-4537-b00e-416bd717d2ce","Type":"ContainerStarted","Data":"cc2cdca1b5b95cc33f9cdba6d6806c629f59e327f9d1e8b32134a9eff438f807"} Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.469645 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.549110142 podStartE2EDuration="6.469626745s" podCreationTimestamp="2025-11-24 21:58:14 +0000 UTC" firstStartedPulling="2025-11-24 21:58:15.488624103 +0000 UTC m=+1178.405607475" lastFinishedPulling="2025-11-24 21:58:19.409140706 +0000 UTC m=+1182.326124078" observedRunningTime="2025-11-24 21:58:20.464119939 +0000 UTC m=+1183.381103311" watchObservedRunningTime="2025-11-24 21:58:20.469626745 +0000 UTC m=+1183.386610117" Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.486726 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" podStartSLOduration=6.486707779 podStartE2EDuration="6.486707779s" podCreationTimestamp="2025-11-24 21:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:58:20.48354931 +0000 UTC m=+1183.400532692" watchObservedRunningTime="2025-11-24 21:58:20.486707779 +0000 UTC m=+1183.403691161" Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.515391 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.262826233 podStartE2EDuration="6.515354641s" podCreationTimestamp="2025-11-24 21:58:14 +0000 UTC" firstStartedPulling="2025-11-24 21:58:15.179404274 +0000 UTC m=+1178.096387646" lastFinishedPulling="2025-11-24 21:58:19.431932682 +0000 UTC m=+1182.348916054" observedRunningTime="2025-11-24 
21:58:20.508711912 +0000 UTC m=+1183.425695294" watchObservedRunningTime="2025-11-24 21:58:20.515354641 +0000 UTC m=+1183.432338023" Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.531195 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.9820434860000002 podStartE2EDuration="6.531175219s" podCreationTimestamp="2025-11-24 21:58:14 +0000 UTC" firstStartedPulling="2025-11-24 21:58:15.836204719 +0000 UTC m=+1178.753188091" lastFinishedPulling="2025-11-24 21:58:19.385336442 +0000 UTC m=+1182.302319824" observedRunningTime="2025-11-24 21:58:20.526184457 +0000 UTC m=+1183.443167819" watchObservedRunningTime="2025-11-24 21:58:20.531175219 +0000 UTC m=+1183.448158601" Nov 24 21:58:20 crc kubenswrapper[4767]: I1124 21:58:20.547998 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.781533837 podStartE2EDuration="6.547980045s" podCreationTimestamp="2025-11-24 21:58:14 +0000 UTC" firstStartedPulling="2025-11-24 21:58:15.660694038 +0000 UTC m=+1178.577677410" lastFinishedPulling="2025-11-24 21:58:19.427140246 +0000 UTC m=+1182.344123618" observedRunningTime="2025-11-24 21:58:20.543698223 +0000 UTC m=+1183.460681595" watchObservedRunningTime="2025-11-24 21:58:20.547980045 +0000 UTC m=+1183.464963407" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.077537 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.211371 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-logs\") pod \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.211444 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwrlg\" (UniqueName: \"kubernetes.io/projected/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-kube-api-access-dwrlg\") pod \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.211483 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-config-data\") pod \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.211604 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-combined-ca-bundle\") pod \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\" (UID: \"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74\") " Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.212617 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-logs" (OuterVolumeSpecName: "logs") pod "0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" (UID: "0c0d6092-1748-4cfa-b0b1-22ad1d19fc74"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.217621 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-kube-api-access-dwrlg" (OuterVolumeSpecName: "kube-api-access-dwrlg") pod "0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" (UID: "0c0d6092-1748-4cfa-b0b1-22ad1d19fc74"). InnerVolumeSpecName "kube-api-access-dwrlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.245879 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-config-data" (OuterVolumeSpecName: "config-data") pod "0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" (UID: "0c0d6092-1748-4cfa-b0b1-22ad1d19fc74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.254449 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" (UID: "0c0d6092-1748-4cfa-b0b1-22ad1d19fc74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.313217 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.313253 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.313280 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwrlg\" (UniqueName: \"kubernetes.io/projected/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-kube-api-access-dwrlg\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.313290 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.483672 4767 generic.go:334] "Generic (PLEG): container finished" podID="0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" containerID="d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040" exitCode=0 Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.483703 4767 generic.go:334] "Generic (PLEG): container finished" podID="0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" containerID="1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68" exitCode=143 Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.483818 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.483877 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74","Type":"ContainerDied","Data":"d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040"} Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.483933 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74","Type":"ContainerDied","Data":"1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68"} Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.483949 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0d6092-1748-4cfa-b0b1-22ad1d19fc74","Type":"ContainerDied","Data":"282923dc4d05c3702df71dcfd75385bbac49f63487b27bbfc99553e490c6e418"} Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.483969 4767 scope.go:117] "RemoveContainer" containerID="d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.490170 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a","Type":"ContainerStarted","Data":"85f1b3ff6b5bfe4b949a557a459161711560761cba52dbbe2edfc3132ae3bc20"} Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.533320 4767 scope.go:117] "RemoveContainer" containerID="1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.561449 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.563844 4767 scope.go:117] "RemoveContainer" containerID="d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040" Nov 24 21:58:21 crc kubenswrapper[4767]: E1124 21:58:21.564521 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040\": container with ID starting with d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040 not found: ID does not exist" containerID="d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.564581 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040"} err="failed to get container status \"d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040\": rpc error: code = NotFound desc = could not find container \"d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040\": container with ID starting with d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040 not found: ID does not exist" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.564616 4767 scope.go:117] "RemoveContainer" containerID="1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68" Nov 24 21:58:21 crc kubenswrapper[4767]: E1124 21:58:21.565035 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68\": container with ID starting with 1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68 not found: ID does 
not exist" containerID="1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.565070 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68"} err="failed to get container status \"1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68\": rpc error: code = NotFound desc = could not find container \"1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68\": container with ID starting with 1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68 not found: ID does not exist" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.565090 4767 scope.go:117] "RemoveContainer" containerID="d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.566010 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040"} err="failed to get container status \"d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040\": rpc error: code = NotFound desc = could not find container \"d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040\": container with ID starting with d421c1aceacbdbc3b725fda4e38bf6cae8bd3d7c09aa593554670e5770fc7040 not found: ID does not exist" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.566049 4767 scope.go:117] "RemoveContainer" containerID="1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.568223 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68"} err="failed to get container status \"1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68\": rpc error: code = NotFound desc = could not find container \"1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68\": container with ID starting with 1829bae7a01c949b4a9a7558371e7c446956668bf89e26fc244d24ff1b268a68 not found: ID does not exist" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.579806 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.590145 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:21 crc kubenswrapper[4767]: E1124 21:58:21.590528 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" containerName="nova-metadata-log" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.590543 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" containerName="nova-metadata-log" Nov 24 21:58:21 crc kubenswrapper[4767]: E1124 21:58:21.590559 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" containerName="nova-metadata-metadata" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.590565 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" containerName="nova-metadata-metadata" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.590749 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" containerName="nova-metadata-log" Nov 24 
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.590763 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" containerName="nova-metadata-metadata"
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.591730 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.600188 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.600345 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.605936 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.722740 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0"
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.722814 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0"
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.722841 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9hkq\" (UniqueName: \"kubernetes.io/projected/5b659866-f25f-4013-8df4-c77fbb839461-kube-api-access-n9hkq\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0"
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.722872 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b659866-f25f-4013-8df4-c77fbb839461-logs\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0"
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.722908 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-config-data\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0"
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.824115 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b659866-f25f-4013-8df4-c77fbb839461-logs\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0"
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.824202 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-config-data\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0"
Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.824381 4767
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.824467 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.824510 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9hkq\" (UniqueName: \"kubernetes.io/projected/5b659866-f25f-4013-8df4-c77fbb839461-kube-api-access-n9hkq\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.824547 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b659866-f25f-4013-8df4-c77fbb839461-logs\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.828433 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.829047 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-config-data\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.829423 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.845439 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9hkq\" (UniqueName: \"kubernetes.io/projected/5b659866-f25f-4013-8df4-c77fbb839461-kube-api-access-n9hkq\") pod \"nova-metadata-0\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " pod="openstack/nova-metadata-0" Nov 24 21:58:21 crc kubenswrapper[4767]: I1124 21:58:21.927239 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:58:22 crc kubenswrapper[4767]: I1124 21:58:22.325203 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0d6092-1748-4cfa-b0b1-22ad1d19fc74" path="/var/lib/kubelet/pods/0c0d6092-1748-4cfa-b0b1-22ad1d19fc74/volumes" Nov 24 21:58:22 crc kubenswrapper[4767]: I1124 21:58:22.418849 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:22 crc kubenswrapper[4767]: I1124 21:58:22.501170 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b659866-f25f-4013-8df4-c77fbb839461","Type":"ContainerStarted","Data":"33401107f596ec2f476b82595b0b71e093103ca449518c933d5ac093e657d708"} Nov 24 21:58:22 crc kubenswrapper[4767]: I1124 21:58:22.508625 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a","Type":"ContainerStarted","Data":"514f28dedfd7148ec000b9c74fdb4bc81a5a6d70392a066d9d9a95b784def134"} Nov 24 21:58:22 crc kubenswrapper[4767]: I1124 21:58:22.508663 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a","Type":"ContainerStarted","Data":"5c30ec5ec873ef3e7fe0fb72a1fd119c67775a25ddef3b37b9f654602defde7c"} Nov 24 21:58:23 crc kubenswrapper[4767]: I1124 21:58:23.519704 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b659866-f25f-4013-8df4-c77fbb839461","Type":"ContainerStarted","Data":"99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b"} Nov 24 21:58:23 crc kubenswrapper[4767]: I1124 21:58:23.519760 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b659866-f25f-4013-8df4-c77fbb839461","Type":"ContainerStarted","Data":"b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421"} Nov 24 21:58:23 crc kubenswrapper[4767]: I1124 21:58:23.554320 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.554299503 podStartE2EDuration="2.554299503s" podCreationTimestamp="2025-11-24 21:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:58:23.545615287 +0000 UTC m=+1186.462598659" watchObservedRunningTime="2025-11-24 21:58:23.554299503 +0000 UTC m=+1186.471282875" Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.539165 4767 generic.go:334] "Generic (PLEG): container finished" podID="04caedcb-53f5-42d5-9161-850f38541c06" containerID="4d681b6ee97b4e2ec2c7c2a6f9c1d4f4b136be0ada0a441a05165ba674b226c5" exitCode=0 Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.539237 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5m85d" event={"ID":"04caedcb-53f5-42d5-9161-850f38541c06","Type":"ContainerDied","Data":"4d681b6ee97b4e2ec2c7c2a6f9c1d4f4b136be0ada0a441a05165ba674b226c5"} Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.543441 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a","Type":"ContainerStarted","Data":"7ed5c3346eca8c668d0777678911a7502aea92869a6103c793ad88a43a314ced"} Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.543892 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" 
Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.544807 4767 generic.go:334] "Generic (PLEG): container finished" podID="cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0" containerID="e01c9f961ec22e08c4c0d7fbc846695049ed620091c6fd003e6faca82305f6fe" exitCode=0 Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.544890 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7pqm2" event={"ID":"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0","Type":"ContainerDied","Data":"e01c9f961ec22e08c4c0d7fbc846695049ed620091c6fd003e6faca82305f6fe"} Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.589174 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.987087593 podStartE2EDuration="6.589153146s" podCreationTimestamp="2025-11-24 21:58:18 +0000 UTC" firstStartedPulling="2025-11-24 21:58:19.927136249 +0000 UTC m=+1182.844119621" lastFinishedPulling="2025-11-24 21:58:23.529201802 +0000 UTC m=+1186.446185174" observedRunningTime="2025-11-24 21:58:24.582453976 +0000 UTC m=+1187.499437378" watchObservedRunningTime="2025-11-24 21:58:24.589153146 +0000 UTC m=+1187.506136518" Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.766406 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.766470 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.794858 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.799298 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.799348 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.812931 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.830589 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.861582 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.905960 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d7gmk"] Nov 24 21:58:24 crc kubenswrapper[4767]: I1124 21:58:24.906178 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" podUID="0caee68e-529a-4a72-95af-fda2e98e230b" containerName="dnsmasq-dns" containerID="cri-o://9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50" gracePeriod=10 Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.410252 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.495514 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-nb\") pod \"0caee68e-529a-4a72-95af-fda2e98e230b\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.495858 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-config\") pod \"0caee68e-529a-4a72-95af-fda2e98e230b\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.495912 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7krh6\" (UniqueName: \"kubernetes.io/projected/0caee68e-529a-4a72-95af-fda2e98e230b-kube-api-access-7krh6\") pod \"0caee68e-529a-4a72-95af-fda2e98e230b\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.495967 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-swift-storage-0\") pod \"0caee68e-529a-4a72-95af-fda2e98e230b\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.496040 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-sb\") pod \"0caee68e-529a-4a72-95af-fda2e98e230b\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.496066 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-svc\") pod \"0caee68e-529a-4a72-95af-fda2e98e230b\" (UID: \"0caee68e-529a-4a72-95af-fda2e98e230b\") " Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.506091 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0caee68e-529a-4a72-95af-fda2e98e230b-kube-api-access-7krh6" (OuterVolumeSpecName: "kube-api-access-7krh6") pod "0caee68e-529a-4a72-95af-fda2e98e230b" (UID: "0caee68e-529a-4a72-95af-fda2e98e230b"). InnerVolumeSpecName "kube-api-access-7krh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.545277 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0caee68e-529a-4a72-95af-fda2e98e230b" (UID: "0caee68e-529a-4a72-95af-fda2e98e230b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.556187 4767 generic.go:334] "Generic (PLEG): container finished" podID="0caee68e-529a-4a72-95af-fda2e98e230b" containerID="9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50" exitCode=0 Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.558317 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" event={"ID":"0caee68e-529a-4a72-95af-fda2e98e230b","Type":"ContainerDied","Data":"9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50"} Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.558376 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" event={"ID":"0caee68e-529a-4a72-95af-fda2e98e230b","Type":"ContainerDied","Data":"a722988826fbfaef25f2e43e1cce4b29d7a9c26b324e6ef4b10885d3dc925bbe"} Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.558400 4767 scope.go:117] "RemoveContainer" containerID="9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.558593 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d7gmk" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.601948 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7krh6\" (UniqueName: \"kubernetes.io/projected/0caee68e-529a-4a72-95af-fda2e98e230b-kube-api-access-7krh6\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.601976 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.615293 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.634872 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-config" (OuterVolumeSpecName: "config") pod "0caee68e-529a-4a72-95af-fda2e98e230b" (UID: "0caee68e-529a-4a72-95af-fda2e98e230b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.635192 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0caee68e-529a-4a72-95af-fda2e98e230b" (UID: "0caee68e-529a-4a72-95af-fda2e98e230b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.645035 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0caee68e-529a-4a72-95af-fda2e98e230b" (UID: "0caee68e-529a-4a72-95af-fda2e98e230b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.656088 4767 scope.go:117] "RemoveContainer" containerID="c843f6e5c6c54aa04130350d68623ec3357b9aaf6d8e48d785a52b3e804e7edf" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.678824 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0caee68e-529a-4a72-95af-fda2e98e230b" (UID: "0caee68e-529a-4a72-95af-fda2e98e230b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.721042 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.721082 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.721093 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.721102 4767 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0caee68e-529a-4a72-95af-fda2e98e230b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.734547 4767 scope.go:117] "RemoveContainer" containerID="9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50" Nov 24 21:58:25 crc kubenswrapper[4767]: E1124 21:58:25.735350 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50\": container with ID starting with 9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50 not found: ID does not exist" containerID="9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.735422 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50"} err="failed to get container status \"9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50\": rpc error: code = NotFound desc = could not find container \"9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50\": container with ID starting with 9dc216d6783602ab297839e434202f4fa2168623fed3037402e8a85a789bfc50 not found: ID does not exist" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.735477 4767 scope.go:117] "RemoveContainer" containerID="c843f6e5c6c54aa04130350d68623ec3357b9aaf6d8e48d785a52b3e804e7edf" Nov 24 21:58:25 crc kubenswrapper[4767]: E1124 21:58:25.735904 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c843f6e5c6c54aa04130350d68623ec3357b9aaf6d8e48d785a52b3e804e7edf\": container with ID starting with c843f6e5c6c54aa04130350d68623ec3357b9aaf6d8e48d785a52b3e804e7edf not found: ID does not exist" 
containerID="c843f6e5c6c54aa04130350d68623ec3357b9aaf6d8e48d785a52b3e804e7edf" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.735931 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c843f6e5c6c54aa04130350d68623ec3357b9aaf6d8e48d785a52b3e804e7edf"} err="failed to get container status \"c843f6e5c6c54aa04130350d68623ec3357b9aaf6d8e48d785a52b3e804e7edf\": rpc error: code = NotFound desc = could not find container \"c843f6e5c6c54aa04130350d68623ec3357b9aaf6d8e48d785a52b3e804e7edf\": container with ID starting with c843f6e5c6c54aa04130350d68623ec3357b9aaf6d8e48d785a52b3e804e7edf not found: ID does not exist" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.849717 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.850306 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.929645 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d7gmk"] Nov 24 21:58:25 crc kubenswrapper[4767]: I1124 21:58:25.936908 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d7gmk"] Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.161830 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.166386 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.325767 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0caee68e-529a-4a72-95af-fda2e98e230b" path="/var/lib/kubelet/pods/0caee68e-529a-4a72-95af-fda2e98e230b/volumes" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.332024 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-scripts\") pod \"04caedcb-53f5-42d5-9161-850f38541c06\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.332214 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-combined-ca-bundle\") pod \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.332249 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-scripts\") pod \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.332292 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk9f8\" (UniqueName: \"kubernetes.io/projected/04caedcb-53f5-42d5-9161-850f38541c06-kube-api-access-pk9f8\") pod \"04caedcb-53f5-42d5-9161-850f38541c06\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.332391 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hknd\" (UniqueName: \"kubernetes.io/projected/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-kube-api-access-5hknd\") pod \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.332460 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-config-data\") pod \"04caedcb-53f5-42d5-9161-850f38541c06\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.332493 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-config-data\") pod \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\" (UID: \"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0\") " Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.332517 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-combined-ca-bundle\") pod \"04caedcb-53f5-42d5-9161-850f38541c06\" (UID: \"04caedcb-53f5-42d5-9161-850f38541c06\") " Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.338595 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-scripts" (OuterVolumeSpecName: "scripts") pod "04caedcb-53f5-42d5-9161-850f38541c06" (UID: "04caedcb-53f5-42d5-9161-850f38541c06"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.342492 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-kube-api-access-5hknd" (OuterVolumeSpecName: "kube-api-access-5hknd") pod "cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0" (UID: "cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0"). InnerVolumeSpecName "kube-api-access-5hknd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.355372 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-scripts" (OuterVolumeSpecName: "scripts") pod "cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0" (UID: "cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.355485 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04caedcb-53f5-42d5-9161-850f38541c06-kube-api-access-pk9f8" (OuterVolumeSpecName: "kube-api-access-pk9f8") pod "04caedcb-53f5-42d5-9161-850f38541c06" (UID: "04caedcb-53f5-42d5-9161-850f38541c06"). InnerVolumeSpecName "kube-api-access-pk9f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.361406 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-config-data" (OuterVolumeSpecName: "config-data") pod "04caedcb-53f5-42d5-9161-850f38541c06" (UID: "04caedcb-53f5-42d5-9161-850f38541c06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.377042 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04caedcb-53f5-42d5-9161-850f38541c06" (UID: "04caedcb-53f5-42d5-9161-850f38541c06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.381627 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-config-data" (OuterVolumeSpecName: "config-data") pod "cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0" (UID: "cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.387242 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0" (UID: "cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.434201 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.434231 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.434240 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk9f8\" (UniqueName: \"kubernetes.io/projected/04caedcb-53f5-42d5-9161-850f38541c06-kube-api-access-pk9f8\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.434251 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hknd\" (UniqueName: \"kubernetes.io/projected/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-kube-api-access-5hknd\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.434259 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.434281 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.434290 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.434299 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04caedcb-53f5-42d5-9161-850f38541c06-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.567918 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7pqm2" event={"ID":"cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0","Type":"ContainerDied","Data":"9fd8df6de9e06c607f62d832c664ea1b23b81430f30651731314ae9ebabe0b60"} Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.567958 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd8df6de9e06c607f62d832c664ea1b23b81430f30651731314ae9ebabe0b60" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.568004 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7pqm2" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.576218 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5m85d" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.576923 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5m85d" event={"ID":"04caedcb-53f5-42d5-9161-850f38541c06","Type":"ContainerDied","Data":"36a985d96387cd11353314cae256926a6ebcca7fc6cf9ccf28fd9871699aed21"} Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.576950 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36a985d96387cd11353314cae256926a6ebcca7fc6cf9ccf28fd9871699aed21" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.699226 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:58:26 crc kubenswrapper[4767]: E1124 21:58:26.699681 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04caedcb-53f5-42d5-9161-850f38541c06" containerName="nova-cell1-conductor-db-sync" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.699700 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="04caedcb-53f5-42d5-9161-850f38541c06" containerName="nova-cell1-conductor-db-sync" Nov 24 21:58:26 crc kubenswrapper[4767]: E1124 21:58:26.699731 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0" containerName="nova-manage" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.699737 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0" containerName="nova-manage" Nov 24 21:58:26 crc kubenswrapper[4767]: E1124 21:58:26.699751 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0caee68e-529a-4a72-95af-fda2e98e230b" containerName="init" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.699757 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0caee68e-529a-4a72-95af-fda2e98e230b" containerName="init" Nov 24 21:58:26 crc kubenswrapper[4767]: E1124 21:58:26.699772 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0caee68e-529a-4a72-95af-fda2e98e230b" containerName="dnsmasq-dns" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.699778 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0caee68e-529a-4a72-95af-fda2e98e230b" containerName="dnsmasq-dns" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.699949 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0" containerName="nova-manage" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.699969 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="04caedcb-53f5-42d5-9161-850f38541c06" containerName="nova-cell1-conductor-db-sync" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.699988 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="0caee68e-529a-4a72-95af-fda2e98e230b" containerName="dnsmasq-dns" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.700678 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.712037 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.715282 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.825196 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.825768 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerName="nova-api-log" containerID="cri-o://e376c9d935086674c61de6631e0a46a078f89c0c5db12143e76b4a78ae5e986f" gracePeriod=30 Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.825797 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerName="nova-api-api" containerID="cri-o://633ef015a6a75b60caa91dde44609ed958b624d31c001a4965f1df1fc435e86c" gracePeriod=30 Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.841180 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.842127 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpghk\" (UniqueName: \"kubernetes.io/projected/41bdf82d-f1b9-4575-a36b-32d5617b9562-kube-api-access-dpghk\") pod \"nova-cell1-conductor-0\" (UID: \"41bdf82d-f1b9-4575-a36b-32d5617b9562\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.842208 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41bdf82d-f1b9-4575-a36b-32d5617b9562-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"41bdf82d-f1b9-4575-a36b-32d5617b9562\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.842331 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41bdf82d-f1b9-4575-a36b-32d5617b9562-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"41bdf82d-f1b9-4575-a36b-32d5617b9562\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.851496 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.851816 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5b659866-f25f-4013-8df4-c77fbb839461" containerName="nova-metadata-log" containerID="cri-o://b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421" gracePeriod=30 Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.852077 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5b659866-f25f-4013-8df4-c77fbb839461" containerName="nova-metadata-metadata" containerID="cri-o://99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b" gracePeriod=30 Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.928056 4767 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.928120 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.943895 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpghk\" (UniqueName: \"kubernetes.io/projected/41bdf82d-f1b9-4575-a36b-32d5617b9562-kube-api-access-dpghk\") pod \"nova-cell1-conductor-0\" (UID: \"41bdf82d-f1b9-4575-a36b-32d5617b9562\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.944137 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41bdf82d-f1b9-4575-a36b-32d5617b9562-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"41bdf82d-f1b9-4575-a36b-32d5617b9562\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.944350 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41bdf82d-f1b9-4575-a36b-32d5617b9562-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"41bdf82d-f1b9-4575-a36b-32d5617b9562\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.949886 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41bdf82d-f1b9-4575-a36b-32d5617b9562-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"41bdf82d-f1b9-4575-a36b-32d5617b9562\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.949902 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41bdf82d-f1b9-4575-a36b-32d5617b9562-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"41bdf82d-f1b9-4575-a36b-32d5617b9562\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:26 crc kubenswrapper[4767]: I1124 21:58:26.962762 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpghk\" (UniqueName: \"kubernetes.io/projected/41bdf82d-f1b9-4575-a36b-32d5617b9562-kube-api-access-dpghk\") pod \"nova-cell1-conductor-0\" (UID: \"41bdf82d-f1b9-4575-a36b-32d5617b9562\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.017135 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.484555 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.557039 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-config-data\") pod \"5b659866-f25f-4013-8df4-c77fbb839461\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.557394 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b659866-f25f-4013-8df4-c77fbb839461-logs\") pod \"5b659866-f25f-4013-8df4-c77fbb839461\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.557544 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-combined-ca-bundle\") pod \"5b659866-f25f-4013-8df4-c77fbb839461\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.557635 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-nova-metadata-tls-certs\") pod \"5b659866-f25f-4013-8df4-c77fbb839461\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.557717 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b659866-f25f-4013-8df4-c77fbb839461-logs" (OuterVolumeSpecName: "logs") pod "5b659866-f25f-4013-8df4-c77fbb839461" (UID: "5b659866-f25f-4013-8df4-c77fbb839461"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.557885 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9hkq\" (UniqueName: \"kubernetes.io/projected/5b659866-f25f-4013-8df4-c77fbb839461-kube-api-access-n9hkq\") pod \"5b659866-f25f-4013-8df4-c77fbb839461\" (UID: \"5b659866-f25f-4013-8df4-c77fbb839461\") " Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.558390 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b659866-f25f-4013-8df4-c77fbb839461-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.566000 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b659866-f25f-4013-8df4-c77fbb839461-kube-api-access-n9hkq" (OuterVolumeSpecName: "kube-api-access-n9hkq") pod "5b659866-f25f-4013-8df4-c77fbb839461" (UID: "5b659866-f25f-4013-8df4-c77fbb839461"). InnerVolumeSpecName "kube-api-access-n9hkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.570207 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.592390 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-config-data" (OuterVolumeSpecName: "config-data") pod "5b659866-f25f-4013-8df4-c77fbb839461" (UID: "5b659866-f25f-4013-8df4-c77fbb839461"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.592939 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b659866-f25f-4013-8df4-c77fbb839461" (UID: "5b659866-f25f-4013-8df4-c77fbb839461"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.596081 4767 generic.go:334] "Generic (PLEG): container finished" podID="5b659866-f25f-4013-8df4-c77fbb839461" containerID="99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b" exitCode=0 Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.596111 4767 generic.go:334] "Generic (PLEG): container finished" podID="5b659866-f25f-4013-8df4-c77fbb839461" containerID="b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421" exitCode=143 Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.596177 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b659866-f25f-4013-8df4-c77fbb839461","Type":"ContainerDied","Data":"99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b"} Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.596204 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b659866-f25f-4013-8df4-c77fbb839461","Type":"ContainerDied","Data":"b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421"} Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.596215 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b659866-f25f-4013-8df4-c77fbb839461","Type":"ContainerDied","Data":"33401107f596ec2f476b82595b0b71e093103ca449518c933d5ac093e657d708"} Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.596229 4767 scope.go:117] "RemoveContainer" containerID="99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.596352 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.601724 4767 generic.go:334] "Generic (PLEG): container finished" podID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerID="e376c9d935086674c61de6631e0a46a078f89c0c5db12143e76b4a78ae5e986f" exitCode=143 Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.601802 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f","Type":"ContainerDied","Data":"e376c9d935086674c61de6631e0a46a078f89c0c5db12143e76b4a78ae5e986f"} Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.601892 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="598505e6-8585-4537-b00e-416bd717d2ce" containerName="nova-scheduler-scheduler" containerID="cri-o://cc2cdca1b5b95cc33f9cdba6d6806c629f59e327f9d1e8b32134a9eff438f807" gracePeriod=30 Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.613283 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5b659866-f25f-4013-8df4-c77fbb839461" (UID: "5b659866-f25f-4013-8df4-c77fbb839461"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.638757 4767 scope.go:117] "RemoveContainer" containerID="b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.660601 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.660634 4767 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.660644 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9hkq\" (UniqueName: \"kubernetes.io/projected/5b659866-f25f-4013-8df4-c77fbb839461-kube-api-access-n9hkq\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.660654 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b659866-f25f-4013-8df4-c77fbb839461-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.671331 4767 scope.go:117] "RemoveContainer" containerID="99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b" Nov 24 21:58:27 crc kubenswrapper[4767]: E1124 21:58:27.671900 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b\": container with ID starting with 99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b not found: ID does not exist" containerID="99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.671932 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b"} err="failed to get container status \"99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b\": rpc error: code = NotFound desc = could not find container \"99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b\": container with ID starting with 99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b not found: ID does not exist" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.671953 4767 scope.go:117] "RemoveContainer" containerID="b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421" Nov 24 21:58:27 crc kubenswrapper[4767]: E1124 21:58:27.672350 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421\": container with ID starting with b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421 not found: ID does not exist" containerID="b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.672382 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421"} err="failed to get container status 
\"b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421\": rpc error: code = NotFound desc = could not find container \"b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421\": container with ID starting with b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421 not found: ID does not exist" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.672399 4767 scope.go:117] "RemoveContainer" containerID="99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.673638 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b"} err="failed to get container status \"99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b\": rpc error: code = NotFound desc = could not find container \"99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b\": container with ID starting with 99cc0a59d38a8bb3ff80b7362ac053935fcbe80e5d3e87ca7a13cf945fd4d48b not found: ID does not exist" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.673659 4767 scope.go:117] "RemoveContainer" containerID="b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.673902 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421"} err="failed to get container status \"b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421\": rpc error: code = NotFound desc = could not find container \"b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421\": container with ID starting with b67d9b749f6f2e81f08d72d26b077f0147784049cebe3666ce24b24e943f3421 not found: ID does not exist" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.933165 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.943292 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.953693 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:27 crc kubenswrapper[4767]: E1124 21:58:27.954143 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b659866-f25f-4013-8df4-c77fbb839461" containerName="nova-metadata-metadata" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.954169 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b659866-f25f-4013-8df4-c77fbb839461" containerName="nova-metadata-metadata" Nov 24 21:58:27 crc kubenswrapper[4767]: E1124 21:58:27.954186 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b659866-f25f-4013-8df4-c77fbb839461" containerName="nova-metadata-log" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.954194 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b659866-f25f-4013-8df4-c77fbb839461" containerName="nova-metadata-log" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.954479 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b659866-f25f-4013-8df4-c77fbb839461" containerName="nova-metadata-log" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.954502 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b659866-f25f-4013-8df4-c77fbb839461" containerName="nova-metadata-metadata" Nov 24 
21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.955667 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.962887 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.963103 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:58:27 crc kubenswrapper[4767]: I1124 21:58:27.971652 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.069307 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.069406 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjh9v\" (UniqueName: \"kubernetes.io/projected/f22ba9ab-6fb1-42bf-afe8-80090a611d52-kube-api-access-gjh9v\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.069471 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f22ba9ab-6fb1-42bf-afe8-80090a611d52-logs\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.069503 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.069521 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-config-data\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.171018 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.171151 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjh9v\" (UniqueName: \"kubernetes.io/projected/f22ba9ab-6fb1-42bf-afe8-80090a611d52-kube-api-access-gjh9v\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.171243 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f22ba9ab-6fb1-42bf-afe8-80090a611d52-logs\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.171317 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.171345 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-config-data\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.171979 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f22ba9ab-6fb1-42bf-afe8-80090a611d52-logs\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.175968 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-config-data\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.177215 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.177454 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.189009 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjh9v\" (UniqueName: \"kubernetes.io/projected/f22ba9ab-6fb1-42bf-afe8-80090a611d52-kube-api-access-gjh9v\") pod \"nova-metadata-0\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.301062 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:58:28 crc kubenswrapper[4767]: E1124 21:58:28.334467 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7e2dc5b_82ce_4ce5_8fb5_b4e52232140f.slice\": RecentStats: unable to find data in memory cache]" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.337056 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b659866-f25f-4013-8df4-c77fbb839461" path="/var/lib/kubelet/pods/5b659866-f25f-4013-8df4-c77fbb839461/volumes" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.613480 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"41bdf82d-f1b9-4575-a36b-32d5617b9562","Type":"ContainerStarted","Data":"0c264ee7b7f1de57104a3c92c778a5b8aa6a491f95ce937a2a9bfc3057da241c"} Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.613997 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.614017 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"41bdf82d-f1b9-4575-a36b-32d5617b9562","Type":"ContainerStarted","Data":"8c3642fb56d2e9eff6df10917a8b8e458171b89d1c9b356c5ac786675a12c37a"} Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.629485 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.629468802 podStartE2EDuration="2.629468802s" podCreationTimestamp="2025-11-24 21:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:58:28.62867642 +0000 UTC m=+1191.545659802" watchObservedRunningTime="2025-11-24 21:58:28.629468802 +0000 UTC m=+1191.546452174" Nov 24 21:58:28 crc kubenswrapper[4767]: I1124 21:58:28.748244 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:58:28 crc kubenswrapper[4767]: W1124 21:58:28.748791 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf22ba9ab_6fb1_42bf_afe8_80090a611d52.slice/crio-c4cd020881e4855e91795dcc66f0ae06d2f78158f1f63aef0c8a0a08ad9b4cd0 WatchSource:0}: Error finding container c4cd020881e4855e91795dcc66f0ae06d2f78158f1f63aef0c8a0a08ad9b4cd0: Status 404 returned error can't find the container with id c4cd020881e4855e91795dcc66f0ae06d2f78158f1f63aef0c8a0a08ad9b4cd0 Nov 24 21:58:29 crc kubenswrapper[4767]: I1124 21:58:29.624824 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f22ba9ab-6fb1-42bf-afe8-80090a611d52","Type":"ContainerStarted","Data":"ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a"} Nov 24 21:58:29 crc kubenswrapper[4767]: I1124 21:58:29.625153 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f22ba9ab-6fb1-42bf-afe8-80090a611d52","Type":"ContainerStarted","Data":"b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345"} Nov 24 21:58:29 crc kubenswrapper[4767]: I1124 21:58:29.625166 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f22ba9ab-6fb1-42bf-afe8-80090a611d52","Type":"ContainerStarted","Data":"c4cd020881e4855e91795dcc66f0ae06d2f78158f1f63aef0c8a0a08ad9b4cd0"} Nov 24 21:58:29 crc kubenswrapper[4767]: I1124 21:58:29.647705 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.647685004 podStartE2EDuration="2.647685004s" podCreationTimestamp="2025-11-24 21:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:58:29.645499973 +0000 UTC m=+1192.562483345" watchObservedRunningTime="2025-11-24 21:58:29.647685004 +0000 UTC m=+1192.564668376" Nov 24 21:58:29 crc kubenswrapper[4767]: E1124 21:58:29.800694 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc2cdca1b5b95cc33f9cdba6d6806c629f59e327f9d1e8b32134a9eff438f807" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:58:29 crc kubenswrapper[4767]: E1124 21:58:29.802203 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc2cdca1b5b95cc33f9cdba6d6806c629f59e327f9d1e8b32134a9eff438f807" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:58:29 crc kubenswrapper[4767]: E1124 21:58:29.804007 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc2cdca1b5b95cc33f9cdba6d6806c629f59e327f9d1e8b32134a9eff438f807" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:58:29 crc kubenswrapper[4767]: E1124 21:58:29.804051 4767 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="598505e6-8585-4537-b00e-416bd717d2ce" containerName="nova-scheduler-scheduler" Nov 24 21:58:31 crc kubenswrapper[4767]: I1124 21:58:31.651032 4767 generic.go:334] "Generic (PLEG): container finished" podID="598505e6-8585-4537-b00e-416bd717d2ce" containerID="cc2cdca1b5b95cc33f9cdba6d6806c629f59e327f9d1e8b32134a9eff438f807" exitCode=0 Nov 24 21:58:31 crc kubenswrapper[4767]: I1124 21:58:31.651087 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"598505e6-8585-4537-b00e-416bd717d2ce","Type":"ContainerDied","Data":"cc2cdca1b5b95cc33f9cdba6d6806c629f59e327f9d1e8b32134a9eff438f807"} Nov 24 21:58:31 crc kubenswrapper[4767]: I1124 21:58:31.802729 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:58:31 crc kubenswrapper[4767]: I1124 21:58:31.946995 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-combined-ca-bundle\") pod \"598505e6-8585-4537-b00e-416bd717d2ce\" (UID: \"598505e6-8585-4537-b00e-416bd717d2ce\") " Nov 24 21:58:31 crc kubenswrapper[4767]: I1124 21:58:31.947470 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xzbj\" (UniqueName: \"kubernetes.io/projected/598505e6-8585-4537-b00e-416bd717d2ce-kube-api-access-5xzbj\") pod \"598505e6-8585-4537-b00e-416bd717d2ce\" (UID: \"598505e6-8585-4537-b00e-416bd717d2ce\") " Nov 24 21:58:31 crc kubenswrapper[4767]: I1124 21:58:31.947633 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-config-data\") pod \"598505e6-8585-4537-b00e-416bd717d2ce\" (UID: \"598505e6-8585-4537-b00e-416bd717d2ce\") " Nov 24 21:58:31 crc kubenswrapper[4767]: I1124 21:58:31.952699 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598505e6-8585-4537-b00e-416bd717d2ce-kube-api-access-5xzbj" (OuterVolumeSpecName: "kube-api-access-5xzbj") pod "598505e6-8585-4537-b00e-416bd717d2ce" (UID: "598505e6-8585-4537-b00e-416bd717d2ce"). InnerVolumeSpecName "kube-api-access-5xzbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:31 crc kubenswrapper[4767]: I1124 21:58:31.977673 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "598505e6-8585-4537-b00e-416bd717d2ce" (UID: "598505e6-8585-4537-b00e-416bd717d2ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:31 crc kubenswrapper[4767]: I1124 21:58:31.980894 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-config-data" (OuterVolumeSpecName: "config-data") pod "598505e6-8585-4537-b00e-416bd717d2ce" (UID: "598505e6-8585-4537-b00e-416bd717d2ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.050465 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.050494 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xzbj\" (UniqueName: \"kubernetes.io/projected/598505e6-8585-4537-b00e-416bd717d2ce-kube-api-access-5xzbj\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.050521 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598505e6-8585-4537-b00e-416bd717d2ce-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.051559 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.663510 4767 generic.go:334] "Generic (PLEG): container finished" podID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerID="633ef015a6a75b60caa91dde44609ed958b624d31c001a4965f1df1fc435e86c" exitCode=0 Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.663593 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f","Type":"ContainerDied","Data":"633ef015a6a75b60caa91dde44609ed958b624d31c001a4965f1df1fc435e86c"} Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.663831 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f","Type":"ContainerDied","Data":"e289ec9a75e4f034e1148c3388ce824c4e67576c61799a50252f1f9a26768c20"} Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.663845 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e289ec9a75e4f034e1148c3388ce824c4e67576c61799a50252f1f9a26768c20" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.666204 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"598505e6-8585-4537-b00e-416bd717d2ce","Type":"ContainerDied","Data":"11222e29b49252ff99d0635877e7c1e4c87f2d266f8e53d2345d8ba6eb75f465"} Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.666279 4767 scope.go:117] "RemoveContainer" containerID="cc2cdca1b5b95cc33f9cdba6d6806c629f59e327f9d1e8b32134a9eff438f807" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.666262 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.753445 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.770027 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.784305 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.797037 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:58:32 crc kubenswrapper[4767]: E1124 21:58:32.797688 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerName="nova-api-log" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.797708 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerName="nova-api-log" Nov 24 21:58:32 crc kubenswrapper[4767]: E1124 21:58:32.797733 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerName="nova-api-api" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.797739 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerName="nova-api-api" Nov 24 21:58:32 crc kubenswrapper[4767]: E1124 21:58:32.797750 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598505e6-8585-4537-b00e-416bd717d2ce" containerName="nova-scheduler-scheduler" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.797757 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="598505e6-8585-4537-b00e-416bd717d2ce" containerName="nova-scheduler-scheduler" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.797973 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerName="nova-api-log" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.797993 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="598505e6-8585-4537-b00e-416bd717d2ce" containerName="nova-scheduler-scheduler" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.798000 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" containerName="nova-api-api" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.798774 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.806666 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.815065 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.866970 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-combined-ca-bundle\") pod \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.867068 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-logs\") pod \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.867141 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-config-data\") pod \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.867229 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6hxp\" (UniqueName: \"kubernetes.io/projected/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-kube-api-access-m6hxp\") pod \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\" (UID: \"3ecbeaef-bd40-4fff-a6c3-abffe5cb368f\") " Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.868057 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-logs" (OuterVolumeSpecName: "logs") pod "3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" (UID: "3ecbeaef-bd40-4fff-a6c3-abffe5cb368f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.874419 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-kube-api-access-m6hxp" (OuterVolumeSpecName: "kube-api-access-m6hxp") pod "3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" (UID: "3ecbeaef-bd40-4fff-a6c3-abffe5cb368f"). InnerVolumeSpecName "kube-api-access-m6hxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.894664 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" (UID: "3ecbeaef-bd40-4fff-a6c3-abffe5cb368f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.903938 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-config-data" (OuterVolumeSpecName: "config-data") pod "3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" (UID: "3ecbeaef-bd40-4fff-a6c3-abffe5cb368f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.969026 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-config-data\") pod \"nova-scheduler-0\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.969128 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.969151 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzp95\" (UniqueName: \"kubernetes.io/projected/21796dd9-fa59-45d8-a276-b4e35f1fcaae-kube-api-access-zzp95\") pod \"nova-scheduler-0\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.969443 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6hxp\" (UniqueName: \"kubernetes.io/projected/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-kube-api-access-m6hxp\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.969477 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.969489 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:32 crc kubenswrapper[4767]: I1124 21:58:32.969499 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.071061 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-config-data\") pod \"nova-scheduler-0\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.071154 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.071180 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzp95\" (UniqueName: \"kubernetes.io/projected/21796dd9-fa59-45d8-a276-b4e35f1fcaae-kube-api-access-zzp95\") pod \"nova-scheduler-0\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.075382 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.076719 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-config-data\") pod \"nova-scheduler-0\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.100187 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzp95\" (UniqueName: \"kubernetes.io/projected/21796dd9-fa59-45d8-a276-b4e35f1fcaae-kube-api-access-zzp95\") pod \"nova-scheduler-0\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " pod="openstack/nova-scheduler-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.115800 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.302599 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.302639 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.604526 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:58:33 crc kubenswrapper[4767]: W1124 21:58:33.605038 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21796dd9_fa59_45d8_a276_b4e35f1fcaae.slice/crio-5ec7bd1c7c5fb45515794a7e71540361d1c440cbdeac7df19a25a6a43965efdf WatchSource:0}: Error finding container 5ec7bd1c7c5fb45515794a7e71540361d1c440cbdeac7df19a25a6a43965efdf: Status 404 returned error can't find the container with id 5ec7bd1c7c5fb45515794a7e71540361d1c440cbdeac7df19a25a6a43965efdf Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.682356 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"21796dd9-fa59-45d8-a276-b4e35f1fcaae","Type":"ContainerStarted","Data":"5ec7bd1c7c5fb45515794a7e71540361d1c440cbdeac7df19a25a6a43965efdf"} Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.684656 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.761349 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.768032 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.775219 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.776756 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.779186 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.796653 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.886195 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.886440 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-config-data\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.886484 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a80b76-4615-4eda-8b74-d434aaa8932f-logs\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.886527 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4zf\" (UniqueName: \"kubernetes.io/projected/52a80b76-4615-4eda-8b74-d434aaa8932f-kube-api-access-zn4zf\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.988420 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.989662 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-config-data\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.990196 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a80b76-4615-4eda-8b74-d434aaa8932f-logs\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.990337 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn4zf\" (UniqueName: \"kubernetes.io/projected/52a80b76-4615-4eda-8b74-d434aaa8932f-kube-api-access-zn4zf\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.990738 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a80b76-4615-4eda-8b74-d434aaa8932f-logs\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " 
pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.999017 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " pod="openstack/nova-api-0" Nov 24 21:58:33 crc kubenswrapper[4767]: I1124 21:58:33.999842 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-config-data\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " pod="openstack/nova-api-0" Nov 24 21:58:34 crc kubenswrapper[4767]: I1124 21:58:34.005633 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn4zf\" (UniqueName: \"kubernetes.io/projected/52a80b76-4615-4eda-8b74-d434aaa8932f-kube-api-access-zn4zf\") pod \"nova-api-0\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " pod="openstack/nova-api-0" Nov 24 21:58:34 crc kubenswrapper[4767]: I1124 21:58:34.128953 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:58:34 crc kubenswrapper[4767]: I1124 21:58:34.339964 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ecbeaef-bd40-4fff-a6c3-abffe5cb368f" path="/var/lib/kubelet/pods/3ecbeaef-bd40-4fff-a6c3-abffe5cb368f/volumes" Nov 24 21:58:34 crc kubenswrapper[4767]: I1124 21:58:34.340915 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598505e6-8585-4537-b00e-416bd717d2ce" path="/var/lib/kubelet/pods/598505e6-8585-4537-b00e-416bd717d2ce/volumes" Nov 24 21:58:34 crc kubenswrapper[4767]: I1124 21:58:34.629308 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:58:34 crc kubenswrapper[4767]: W1124 21:58:34.641460 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52a80b76_4615_4eda_8b74_d434aaa8932f.slice/crio-5b189babddd8a0911a9c9304dc806aa58efce3fd9d8426da056116cc7492b899 WatchSource:0}: Error finding container 5b189babddd8a0911a9c9304dc806aa58efce3fd9d8426da056116cc7492b899: Status 404 returned error can't find the container with id 5b189babddd8a0911a9c9304dc806aa58efce3fd9d8426da056116cc7492b899 Nov 24 21:58:34 crc kubenswrapper[4767]: I1124 21:58:34.698073 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"21796dd9-fa59-45d8-a276-b4e35f1fcaae","Type":"ContainerStarted","Data":"8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc"} Nov 24 21:58:34 crc kubenswrapper[4767]: I1124 21:58:34.700949 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52a80b76-4615-4eda-8b74-d434aaa8932f","Type":"ContainerStarted","Data":"5b189babddd8a0911a9c9304dc806aa58efce3fd9d8426da056116cc7492b899"} Nov 24 21:58:34 crc kubenswrapper[4767]: I1124 21:58:34.722901 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.722773651 podStartE2EDuration="2.722773651s" podCreationTimestamp="2025-11-24 21:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:58:34.714447605 +0000 UTC m=+1197.631430987" watchObservedRunningTime="2025-11-24 21:58:34.722773651 
+0000 UTC m=+1197.639757023" Nov 24 21:58:35 crc kubenswrapper[4767]: I1124 21:58:35.727422 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52a80b76-4615-4eda-8b74-d434aaa8932f","Type":"ContainerStarted","Data":"9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04"} Nov 24 21:58:35 crc kubenswrapper[4767]: I1124 21:58:35.727704 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52a80b76-4615-4eda-8b74-d434aaa8932f","Type":"ContainerStarted","Data":"010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7"} Nov 24 21:58:35 crc kubenswrapper[4767]: I1124 21:58:35.754136 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.754107495 podStartE2EDuration="2.754107495s" podCreationTimestamp="2025-11-24 21:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:58:35.74791988 +0000 UTC m=+1198.664903262" watchObservedRunningTime="2025-11-24 21:58:35.754107495 +0000 UTC m=+1198.671090867" Nov 24 21:58:38 crc kubenswrapper[4767]: I1124 21:58:38.116335 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 21:58:38 crc kubenswrapper[4767]: I1124 21:58:38.303818 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:58:38 crc kubenswrapper[4767]: I1124 21:58:38.303856 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:58:38 crc kubenswrapper[4767]: E1124 21:58:38.586365 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7e2dc5b_82ce_4ce5_8fb5_b4e52232140f.slice\": RecentStats: unable to find data in memory cache]" Nov 24 21:58:39 crc kubenswrapper[4767]: I1124 21:58:39.316469 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:58:39 crc kubenswrapper[4767]: I1124 21:58:39.316482 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:58:43 crc kubenswrapper[4767]: I1124 21:58:43.116929 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 21:58:43 crc kubenswrapper[4767]: I1124 21:58:43.317932 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 21:58:43 crc kubenswrapper[4767]: I1124 21:58:43.872838 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 21:58:44 crc kubenswrapper[4767]: I1124 21:58:44.130345 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:58:44 crc kubenswrapper[4767]: I1124 21:58:44.130439 4767 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:58:45 crc kubenswrapper[4767]: I1124 21:58:45.212394 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:58:45 crc kubenswrapper[4767]: I1124 21:58:45.212577 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:58:48 crc kubenswrapper[4767]: I1124 21:58:48.307870 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 21:58:48 crc kubenswrapper[4767]: I1124 21:58:48.308486 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 21:58:48 crc kubenswrapper[4767]: I1124 21:58:48.337508 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:58:48 crc kubenswrapper[4767]: I1124 21:58:48.337547 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:58:49 crc kubenswrapper[4767]: I1124 21:58:49.113835 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 21:58:50 crc kubenswrapper[4767]: I1124 21:58:50.912812 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"644effd9-c94f-46e2-8b1b-5077f66d023e","Type":"ContainerDied","Data":"2c024080de5204ac3bc7215f0d73eb073f34e6b703d85c6d7c8497d7235077bc"} Nov 24 21:58:50 crc kubenswrapper[4767]: I1124 21:58:50.912693 4767 generic.go:334] "Generic (PLEG): container finished" podID="644effd9-c94f-46e2-8b1b-5077f66d023e" containerID="2c024080de5204ac3bc7215f0d73eb073f34e6b703d85c6d7c8497d7235077bc" exitCode=137 Nov 24 21:58:50 crc kubenswrapper[4767]: I1124 21:58:50.913245 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"644effd9-c94f-46e2-8b1b-5077f66d023e","Type":"ContainerDied","Data":"fa26ef0cef1a49989379009b9a7f6e70937ade46b6bb511adb0741d4770924fb"} Nov 24 21:58:50 crc kubenswrapper[4767]: I1124 21:58:50.913255 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa26ef0cef1a49989379009b9a7f6e70937ade46b6bb511adb0741d4770924fb" Nov 24 21:58:50 crc kubenswrapper[4767]: I1124 21:58:50.913123 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:50 crc kubenswrapper[4767]: I1124 21:58:50.976879 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-combined-ca-bundle\") pod \"644effd9-c94f-46e2-8b1b-5077f66d023e\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " Nov 24 21:58:50 crc kubenswrapper[4767]: I1124 21:58:50.976949 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfghz\" (UniqueName: \"kubernetes.io/projected/644effd9-c94f-46e2-8b1b-5077f66d023e-kube-api-access-hfghz\") pod \"644effd9-c94f-46e2-8b1b-5077f66d023e\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " Nov 24 21:58:50 crc kubenswrapper[4767]: I1124 21:58:50.977236 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-config-data\") pod \"644effd9-c94f-46e2-8b1b-5077f66d023e\" (UID: \"644effd9-c94f-46e2-8b1b-5077f66d023e\") " Nov 24 21:58:50 crc kubenswrapper[4767]: I1124 21:58:50.983410 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644effd9-c94f-46e2-8b1b-5077f66d023e-kube-api-access-hfghz" (OuterVolumeSpecName: "kube-api-access-hfghz") pod "644effd9-c94f-46e2-8b1b-5077f66d023e" (UID: "644effd9-c94f-46e2-8b1b-5077f66d023e"). InnerVolumeSpecName "kube-api-access-hfghz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:51 crc kubenswrapper[4767]: I1124 21:58:51.012827 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-config-data" (OuterVolumeSpecName: "config-data") pod "644effd9-c94f-46e2-8b1b-5077f66d023e" (UID: "644effd9-c94f-46e2-8b1b-5077f66d023e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:51 crc kubenswrapper[4767]: I1124 21:58:51.017873 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "644effd9-c94f-46e2-8b1b-5077f66d023e" (UID: "644effd9-c94f-46e2-8b1b-5077f66d023e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:51 crc kubenswrapper[4767]: I1124 21:58:51.080631 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:51 crc kubenswrapper[4767]: I1124 21:58:51.080679 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfghz\" (UniqueName: \"kubernetes.io/projected/644effd9-c94f-46e2-8b1b-5077f66d023e-kube-api-access-hfghz\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:51 crc kubenswrapper[4767]: I1124 21:58:51.080699 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644effd9-c94f-46e2-8b1b-5077f66d023e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:51 crc kubenswrapper[4767]: I1124 21:58:51.923425 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:51 crc kubenswrapper[4767]: I1124 21:58:51.989253 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.000711 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.012905 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:58:52 crc kubenswrapper[4767]: E1124 21:58:52.013395 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644effd9-c94f-46e2-8b1b-5077f66d023e" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.013416 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="644effd9-c94f-46e2-8b1b-5077f66d023e" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.013668 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="644effd9-c94f-46e2-8b1b-5077f66d023e" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.014472 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.016578 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.016789 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.019839 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.025687 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.101292 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.101620 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.101792 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.101930 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.102049 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqql\" (UniqueName: \"kubernetes.io/projected/1f689eaf-9606-42fc-98cf-d69f82676ecf-kube-api-access-frqql\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.203596 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.203987 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.204258 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.205113 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.205223 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frqql\" (UniqueName: \"kubernetes.io/projected/1f689eaf-9606-42fc-98cf-d69f82676ecf-kube-api-access-frqql\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.208347 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.208726 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.212788 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.213480 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f689eaf-9606-42fc-98cf-d69f82676ecf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.224299 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frqql\" (UniqueName: \"kubernetes.io/projected/1f689eaf-9606-42fc-98cf-d69f82676ecf-kube-api-access-frqql\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f689eaf-9606-42fc-98cf-d69f82676ecf\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.325089 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644effd9-c94f-46e2-8b1b-5077f66d023e" path="/var/lib/kubelet/pods/644effd9-c94f-46e2-8b1b-5077f66d023e/volumes" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.340255 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.806999 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:58:52 crc kubenswrapper[4767]: I1124 21:58:52.940171 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f689eaf-9606-42fc-98cf-d69f82676ecf","Type":"ContainerStarted","Data":"279f3428a44824c1b78ed63159cafc1553177c0effd90a2f883735e4a27d4677"} Nov 24 21:58:53 crc kubenswrapper[4767]: I1124 21:58:53.951757 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f689eaf-9606-42fc-98cf-d69f82676ecf","Type":"ContainerStarted","Data":"1aec308f2e69a99e74e023565635e0405866823169757e9c6f9bc043f443d934"} Nov 24 21:58:53 crc kubenswrapper[4767]: I1124 21:58:53.982030 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.98200457 podStartE2EDuration="2.98200457s" podCreationTimestamp="2025-11-24 21:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:58:53.971890443 +0000 UTC m=+1216.888873855" watchObservedRunningTime="2025-11-24 21:58:53.98200457 +0000 UTC m=+1216.898987972" Nov 24 21:58:54 crc kubenswrapper[4767]: I1124 21:58:54.134230 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:58:54 crc kubenswrapper[4767]: I1124 21:58:54.134855 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:58:54 crc kubenswrapper[4767]: I1124 21:58:54.137339 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:58:54 crc kubenswrapper[4767]: I1124 21:58:54.138930 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:58:54 crc kubenswrapper[4767]: I1124 21:58:54.965902 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:58:54 crc kubenswrapper[4767]: I1124 21:58:54.971688 4767 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.204056 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xhjdp"] Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.207059 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.219857 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xhjdp"] Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.276188 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.276251 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.276313 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chrbt\" (UniqueName: \"kubernetes.io/projected/d66e31b6-987b-4d4f-a897-14bce551de92-kube-api-access-chrbt\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.276389 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.276484 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-config\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.276517 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.378203 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.378325 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-config\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.378354 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.378398 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.378421 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.378450 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chrbt\" (UniqueName: \"kubernetes.io/projected/d66e31b6-987b-4d4f-a897-14bce551de92-kube-api-access-chrbt\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.379613 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-config\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.379682 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.379709 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.379733 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.379837 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" 
(UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.401903 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chrbt\" (UniqueName: \"kubernetes.io/projected/d66e31b6-987b-4d4f-a897-14bce551de92-kube-api-access-chrbt\") pod \"dnsmasq-dns-89c5cd4d5-xhjdp\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:55 crc kubenswrapper[4767]: I1124 21:58:55.539923 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:56 crc kubenswrapper[4767]: I1124 21:58:56.027805 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xhjdp"] Nov 24 21:58:56 crc kubenswrapper[4767]: I1124 21:58:56.983064 4767 generic.go:334] "Generic (PLEG): container finished" podID="d66e31b6-987b-4d4f-a897-14bce551de92" containerID="78926c37b6cba3ee0dd154164df274a22dfe0dce9e6b77e3411b33f6ccb1566e" exitCode=0 Nov 24 21:58:56 crc kubenswrapper[4767]: I1124 21:58:56.983167 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" event={"ID":"d66e31b6-987b-4d4f-a897-14bce551de92","Type":"ContainerDied","Data":"78926c37b6cba3ee0dd154164df274a22dfe0dce9e6b77e3411b33f6ccb1566e"} Nov 24 21:58:56 crc kubenswrapper[4767]: I1124 21:58:56.983586 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" event={"ID":"d66e31b6-987b-4d4f-a897-14bce551de92","Type":"ContainerStarted","Data":"92f7ee6a659791a2c43886f743973df1b40a79e4dfd9afeff7b03edd3aaeedcc"} Nov 24 21:58:57 crc kubenswrapper[4767]: I1124 21:58:57.021312 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:58:57 crc kubenswrapper[4767]: I1124 21:58:57.021756 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="ceilometer-central-agent" containerID="cri-o://85f1b3ff6b5bfe4b949a557a459161711560761cba52dbbe2edfc3132ae3bc20" gracePeriod=30 Nov 24 21:58:57 crc kubenswrapper[4767]: I1124 21:58:57.022166 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="proxy-httpd" containerID="cri-o://7ed5c3346eca8c668d0777678911a7502aea92869a6103c793ad88a43a314ced" gracePeriod=30 Nov 24 21:58:57 crc kubenswrapper[4767]: I1124 21:58:57.022322 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="sg-core" containerID="cri-o://514f28dedfd7148ec000b9c74fdb4bc81a5a6d70392a066d9d9a95b784def134" gracePeriod=30 Nov 24 21:58:57 crc kubenswrapper[4767]: I1124 21:58:57.022418 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="ceilometer-notification-agent" containerID="cri-o://5c30ec5ec873ef3e7fe0fb72a1fd119c67775a25ddef3b37b9f654602defde7c" gracePeriod=30 Nov 24 21:58:57 crc kubenswrapper[4767]: I1124 21:58:57.341402 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:58:57 crc kubenswrapper[4767]: I1124 21:58:57.704286 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Nov 24 21:58:57 crc kubenswrapper[4767]: I1124 21:58:57.996892 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" event={"ID":"d66e31b6-987b-4d4f-a897-14bce551de92","Type":"ContainerStarted","Data":"25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332"} Nov 24 21:58:57 crc kubenswrapper[4767]: I1124 21:58:57.998486 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.004590 4767 generic.go:334] "Generic (PLEG): container finished" podID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerID="7ed5c3346eca8c668d0777678911a7502aea92869a6103c793ad88a43a314ced" exitCode=0 Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.004637 4767 generic.go:334] "Generic (PLEG): container finished" podID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerID="514f28dedfd7148ec000b9c74fdb4bc81a5a6d70392a066d9d9a95b784def134" exitCode=2 Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.004645 4767 generic.go:334] "Generic (PLEG): container finished" podID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerID="5c30ec5ec873ef3e7fe0fb72a1fd119c67775a25ddef3b37b9f654602defde7c" exitCode=0 Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.004641 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a","Type":"ContainerDied","Data":"7ed5c3346eca8c668d0777678911a7502aea92869a6103c793ad88a43a314ced"} Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.004703 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a","Type":"ContainerDied","Data":"514f28dedfd7148ec000b9c74fdb4bc81a5a6d70392a066d9d9a95b784def134"} Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.004717 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a","Type":"ContainerDied","Data":"5c30ec5ec873ef3e7fe0fb72a1fd119c67775a25ddef3b37b9f654602defde7c"} Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.004727 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a","Type":"ContainerDied","Data":"85f1b3ff6b5bfe4b949a557a459161711560761cba52dbbe2edfc3132ae3bc20"} Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.004652 4767 generic.go:334] "Generic (PLEG): container finished" podID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerID="85f1b3ff6b5bfe4b949a557a459161711560761cba52dbbe2edfc3132ae3bc20" exitCode=0 Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.005248 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerName="nova-api-log" containerID="cri-o://010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7" gracePeriod=30 Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.005346 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerName="nova-api-api" containerID="cri-o://9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04" gracePeriod=30 Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.021999 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" podStartSLOduration=3.021980556 podStartE2EDuration="3.021980556s" podCreationTimestamp="2025-11-24 21:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:58:58.015354179 +0000 UTC m=+1220.932337571" watchObservedRunningTime="2025-11-24 21:58:58.021980556 +0000 UTC m=+1220.938963928" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.286477 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.347625 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-log-httpd\") pod \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.347761 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-sg-core-conf-yaml\") pod \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.347843 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-scripts\") pod \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.347925 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-run-httpd\") pod \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.347962 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-config-data\") pod \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.348007 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wjnj\" (UniqueName: \"kubernetes.io/projected/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-kube-api-access-4wjnj\") pod \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.348030 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-combined-ca-bundle\") pod \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.348124 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-ceilometer-tls-certs\") pod \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\" (UID: \"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a\") " Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.351106 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" (UID: "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.351457 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" (UID: "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.359213 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-scripts" (OuterVolumeSpecName: "scripts") pod "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" (UID: "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.363191 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-kube-api-access-4wjnj" (OuterVolumeSpecName: "kube-api-access-4wjnj") pod "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" (UID: "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a"). InnerVolumeSpecName "kube-api-access-4wjnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.407449 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" (UID: "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.422389 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" (UID: "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.451141 4767 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.451182 4767 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.451194 4767 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.451206 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.451218 4767 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.451230 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wjnj\" (UniqueName: \"kubernetes.io/projected/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-kube-api-access-4wjnj\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.487786 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-config-data" (OuterVolumeSpecName: "config-data") pod "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" (UID: "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.504527 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" (UID: "da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.553326 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:58 crc kubenswrapper[4767]: I1124 21:58:58.553363 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.028542 4767 generic.go:334] "Generic (PLEG): container finished" podID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerID="010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7" exitCode=143 Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.028641 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52a80b76-4615-4eda-8b74-d434aaa8932f","Type":"ContainerDied","Data":"010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7"} Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.034764 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.034788 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a","Type":"ContainerDied","Data":"880e0c95cf33c5bfb4ee0230b1f20ca220f5fd3eb62649ce9caffafd881277b8"} Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.034835 4767 scope.go:117] "RemoveContainer" containerID="7ed5c3346eca8c668d0777678911a7502aea92869a6103c793ad88a43a314ced" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.062864 4767 scope.go:117] "RemoveContainer" containerID="514f28dedfd7148ec000b9c74fdb4bc81a5a6d70392a066d9d9a95b784def134" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.079029 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.091857 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.102607 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.103173 4767 scope.go:117] "RemoveContainer" containerID="5c30ec5ec873ef3e7fe0fb72a1fd119c67775a25ddef3b37b9f654602defde7c" Nov 24 21:58:59 crc kubenswrapper[4767]: E1124 21:58:59.103406 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="sg-core" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.103435 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="sg-core" Nov 24 21:58:59 crc kubenswrapper[4767]: E1124 21:58:59.103515 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="ceilometer-central-agent" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.103557 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="ceilometer-central-agent" Nov 24 21:58:59 crc kubenswrapper[4767]: E1124 21:58:59.103589 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" 
containerName="ceilometer-notification-agent" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.103609 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="ceilometer-notification-agent" Nov 24 21:58:59 crc kubenswrapper[4767]: E1124 21:58:59.103632 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="proxy-httpd" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.103645 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="proxy-httpd" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.104740 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="ceilometer-notification-agent" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.104773 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="sg-core" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.104805 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="ceilometer-central-agent" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.104826 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" containerName="proxy-httpd" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.108353 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.112738 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.112878 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.114425 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.130952 4767 scope.go:117] "RemoveContainer" containerID="85f1b3ff6b5bfe4b949a557a459161711560761cba52dbbe2edfc3132ae3bc20" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.134434 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.266382 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.266597 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-scripts\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.266637 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " 
pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.266698 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be11d4b-9b77-43f3-9085-9b8ec61f3018-log-httpd\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.266774 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be11d4b-9b77-43f3-9085-9b8ec61f3018-run-httpd\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.267002 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.267207 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-config-data\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.267334 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mmsl\" (UniqueName: \"kubernetes.io/projected/0be11d4b-9b77-43f3-9085-9b8ec61f3018-kube-api-access-8mmsl\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.369433 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.370002 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-config-data\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.370137 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mmsl\" (UniqueName: \"kubernetes.io/projected/0be11d4b-9b77-43f3-9085-9b8ec61f3018-kube-api-access-8mmsl\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.370325 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.370475 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.370543 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-scripts\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.370634 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be11d4b-9b77-43f3-9085-9b8ec61f3018-log-httpd\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.370721 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be11d4b-9b77-43f3-9085-9b8ec61f3018-run-httpd\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.371725 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be11d4b-9b77-43f3-9085-9b8ec61f3018-run-httpd\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.372019 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be11d4b-9b77-43f3-9085-9b8ec61f3018-log-httpd\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.374823 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-scripts\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.376555 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-config-data\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.376607 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.383408 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.391530 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be11d4b-9b77-43f3-9085-9b8ec61f3018-ceilometer-tls-certs\") pod \"ceilometer-0\" 
(UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.394451 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mmsl\" (UniqueName: \"kubernetes.io/projected/0be11d4b-9b77-43f3-9085-9b8ec61f3018-kube-api-access-8mmsl\") pod \"ceilometer-0\" (UID: \"0be11d4b-9b77-43f3-9085-9b8ec61f3018\") " pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.436343 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:58:59 crc kubenswrapper[4767]: I1124 21:58:59.894072 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:59:00 crc kubenswrapper[4767]: I1124 21:59:00.043960 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be11d4b-9b77-43f3-9085-9b8ec61f3018","Type":"ContainerStarted","Data":"3b0c4431426b481e757d96e76070b225cb07835e250749849cd3cbe6d4f23806"} Nov 24 21:59:00 crc kubenswrapper[4767]: I1124 21:59:00.324063 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a" path="/var/lib/kubelet/pods/da679f0c-cfa0-49a0-b8ca-bfe7ee7e0f2a/volumes" Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.073138 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be11d4b-9b77-43f3-9085-9b8ec61f3018","Type":"ContainerStarted","Data":"87322e78fe323c95b7e85efc8c0bdb4d492ce8ff456057bd9df3d48961d24123"} Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.599203 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.718466 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-config-data\") pod \"52a80b76-4615-4eda-8b74-d434aaa8932f\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.718823 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-combined-ca-bundle\") pod \"52a80b76-4615-4eda-8b74-d434aaa8932f\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.718990 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn4zf\" (UniqueName: \"kubernetes.io/projected/52a80b76-4615-4eda-8b74-d434aaa8932f-kube-api-access-zn4zf\") pod \"52a80b76-4615-4eda-8b74-d434aaa8932f\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.719240 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a80b76-4615-4eda-8b74-d434aaa8932f-logs\") pod \"52a80b76-4615-4eda-8b74-d434aaa8932f\" (UID: \"52a80b76-4615-4eda-8b74-d434aaa8932f\") " Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.719791 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52a80b76-4615-4eda-8b74-d434aaa8932f-logs" (OuterVolumeSpecName: "logs") pod "52a80b76-4615-4eda-8b74-d434aaa8932f" (UID: "52a80b76-4615-4eda-8b74-d434aaa8932f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.720164 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a80b76-4615-4eda-8b74-d434aaa8932f-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.724431 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a80b76-4615-4eda-8b74-d434aaa8932f-kube-api-access-zn4zf" (OuterVolumeSpecName: "kube-api-access-zn4zf") pod "52a80b76-4615-4eda-8b74-d434aaa8932f" (UID: "52a80b76-4615-4eda-8b74-d434aaa8932f"). InnerVolumeSpecName "kube-api-access-zn4zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.758639 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-config-data" (OuterVolumeSpecName: "config-data") pod "52a80b76-4615-4eda-8b74-d434aaa8932f" (UID: "52a80b76-4615-4eda-8b74-d434aaa8932f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.766910 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52a80b76-4615-4eda-8b74-d434aaa8932f" (UID: "52a80b76-4615-4eda-8b74-d434aaa8932f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.821822 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.822068 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a80b76-4615-4eda-8b74-d434aaa8932f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:01 crc kubenswrapper[4767]: I1124 21:59:01.822129 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn4zf\" (UniqueName: \"kubernetes.io/projected/52a80b76-4615-4eda-8b74-d434aaa8932f-kube-api-access-zn4zf\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.083707 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be11d4b-9b77-43f3-9085-9b8ec61f3018","Type":"ContainerStarted","Data":"967cdfeb9c71dc44eed0c7d9a83f12a1e044897b5414ad246b67ce97565693af"} Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.083775 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be11d4b-9b77-43f3-9085-9b8ec61f3018","Type":"ContainerStarted","Data":"5ebf34c0dea5c7ed9cf94f29032f575c09f12b73e4b702286dc0baf51342d3b0"} Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.085634 4767 generic.go:334] "Generic (PLEG): container finished" podID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerID="9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04" exitCode=0 Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.085688 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.085680 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52a80b76-4615-4eda-8b74-d434aaa8932f","Type":"ContainerDied","Data":"9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04"} Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.085806 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52a80b76-4615-4eda-8b74-d434aaa8932f","Type":"ContainerDied","Data":"5b189babddd8a0911a9c9304dc806aa58efce3fd9d8426da056116cc7492b899"} Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.085827 4767 scope.go:117] "RemoveContainer" containerID="9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.109385 4767 scope.go:117] "RemoveContainer" containerID="010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.133558 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.136857 4767 scope.go:117] "RemoveContainer" containerID="9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04" Nov 24 21:59:02 crc kubenswrapper[4767]: E1124 21:59:02.138343 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04\": container with ID starting with 9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04 not found: ID does not exist" containerID="9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.138382 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04"} err="failed to get container status \"9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04\": rpc error: code = NotFound desc = could not find container \"9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04\": container with ID starting with 9ab7960847d5d073780726b1a574ac5b43cf699e175a646e140cd5772d44bc04 not found: ID does not exist" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.138406 4767 scope.go:117] "RemoveContainer" containerID="010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7" Nov 24 21:59:02 crc kubenswrapper[4767]: E1124 21:59:02.138807 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7\": container with ID starting with 010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7 not found: ID does not exist" containerID="010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.138841 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7"} err="failed to get container status \"010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7\": rpc error: code = NotFound desc = could not find container \"010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7\": container with ID starting with 
010cc6d8e04525ce0517daf04ae3d1d9e1189cb5c763f8aab41abda282a421c7 not found: ID does not exist" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.149474 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.164443 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:59:02 crc kubenswrapper[4767]: E1124 21:59:02.164984 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerName="nova-api-api" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.165001 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerName="nova-api-api" Nov 24 21:59:02 crc kubenswrapper[4767]: E1124 21:59:02.165034 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerName="nova-api-log" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.165042 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerName="nova-api-log" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.165327 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerName="nova-api-api" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.165356 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="52a80b76-4615-4eda-8b74-d434aaa8932f" containerName="nova-api-log" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.166674 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.168743 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.168887 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.169590 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.174499 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.230599 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.230765 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-config-data\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.230807 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-logs\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.230866 4767 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.230936 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.230969 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k52n\" (UniqueName: \"kubernetes.io/projected/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-kube-api-access-7k52n\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.333968 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.334330 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.334362 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k52n\" (UniqueName: \"kubernetes.io/projected/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-kube-api-access-7k52n\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.334431 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.334486 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-config-data\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.334507 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-logs\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.334976 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-logs\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 
21:59:02.335829 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52a80b76-4615-4eda-8b74-d434aaa8932f" path="/var/lib/kubelet/pods/52a80b76-4615-4eda-8b74-d434aaa8932f/volumes" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.337960 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.338062 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-config-data\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.338611 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.339209 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.341392 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.354941 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k52n\" (UniqueName: \"kubernetes.io/projected/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-kube-api-access-7k52n\") pod \"nova-api-0\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.364839 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.484088 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:59:02 crc kubenswrapper[4767]: I1124 21:59:02.930787 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:59:02 crc kubenswrapper[4767]: W1124 21:59:02.932887 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d7ff23e_ef1b_4a83_b7b1_34355cee8f8e.slice/crio-c83768aad3342e66210cc32de1e3bb5a48bf715febb2bbcc35d8165cc84effe6 WatchSource:0}: Error finding container c83768aad3342e66210cc32de1e3bb5a48bf715febb2bbcc35d8165cc84effe6: Status 404 returned error can't find the container with id c83768aad3342e66210cc32de1e3bb5a48bf715febb2bbcc35d8165cc84effe6 Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.098057 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e","Type":"ContainerStarted","Data":"c83768aad3342e66210cc32de1e3bb5a48bf715febb2bbcc35d8165cc84effe6"} Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.115343 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.273903 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-7rbl9"] Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.275322 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.278345 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.278866 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.292903 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7rbl9"] Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.363852 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.364111 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-scripts\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.364306 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52hnw\" (UniqueName: \"kubernetes.io/projected/4490c175-4526-4747-a9f3-72d5a757cda9-kube-api-access-52hnw\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.364336 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-config-data\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.465993 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.466055 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-scripts\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.466194 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52hnw\" (UniqueName: \"kubernetes.io/projected/4490c175-4526-4747-a9f3-72d5a757cda9-kube-api-access-52hnw\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.466221 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-config-data\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.471512 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.471567 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-config-data\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.474914 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-scripts\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.496966 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52hnw\" (UniqueName: \"kubernetes.io/projected/4490c175-4526-4747-a9f3-72d5a757cda9-kube-api-access-52hnw\") pod \"nova-cell1-cell-mapping-7rbl9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:03 crc kubenswrapper[4767]: I1124 21:59:03.601508 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:04 crc kubenswrapper[4767]: I1124 21:59:04.100220 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7rbl9"] Nov 24 21:59:04 crc kubenswrapper[4767]: I1124 21:59:04.119965 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e","Type":"ContainerStarted","Data":"21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304"} Nov 24 21:59:04 crc kubenswrapper[4767]: I1124 21:59:04.120204 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e","Type":"ContainerStarted","Data":"24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928"} Nov 24 21:59:04 crc kubenswrapper[4767]: I1124 21:59:04.125717 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be11d4b-9b77-43f3-9085-9b8ec61f3018","Type":"ContainerStarted","Data":"27287b200b9eafed0c3928a5d516c8e8232553447dc898dc3dc4bde3321c0147"} Nov 24 21:59:04 crc kubenswrapper[4767]: I1124 21:59:04.159044 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.159022495 podStartE2EDuration="2.159022495s" podCreationTimestamp="2025-11-24 21:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:59:04.147261252 +0000 UTC m=+1227.064244624" watchObservedRunningTime="2025-11-24 21:59:04.159022495 +0000 UTC m=+1227.076005867" Nov 24 21:59:04 crc kubenswrapper[4767]: I1124 21:59:04.182196 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.712485618 podStartE2EDuration="5.182171431s" podCreationTimestamp="2025-11-24 21:58:59 +0000 UTC" firstStartedPulling="2025-11-24 21:58:59.911088857 +0000 UTC m=+1222.828072229" lastFinishedPulling="2025-11-24 21:59:03.38077467 +0000 UTC m=+1226.297758042" observedRunningTime="2025-11-24 21:59:04.170799178 +0000 UTC m=+1227.087782570" watchObservedRunningTime="2025-11-24 21:59:04.182171431 +0000 UTC m=+1227.099154823" Nov 24 21:59:05 crc kubenswrapper[4767]: I1124 21:59:05.144211 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7rbl9" event={"ID":"4490c175-4526-4747-a9f3-72d5a757cda9","Type":"ContainerStarted","Data":"4ce87428b6b914247bdb63237497193e1ee33b90a0c29370a2f8e98dd8342a21"} Nov 24 21:59:05 crc kubenswrapper[4767]: I1124 21:59:05.146951 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7rbl9" event={"ID":"4490c175-4526-4747-a9f3-72d5a757cda9","Type":"ContainerStarted","Data":"df8b426fa38a96d1e6ab855312cb34237935117b55789d8d8cfddd818b017796"} Nov 24 21:59:05 crc kubenswrapper[4767]: I1124 21:59:05.147119 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:59:05 crc kubenswrapper[4767]: I1124 21:59:05.170622 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-7rbl9" podStartSLOduration=2.170602429 podStartE2EDuration="2.170602429s" podCreationTimestamp="2025-11-24 21:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:59:05.163741355 +0000 UTC 
m=+1228.080724777" watchObservedRunningTime="2025-11-24 21:59:05.170602429 +0000 UTC m=+1228.087585801" Nov 24 21:59:05 crc kubenswrapper[4767]: I1124 21:59:05.542074 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 21:59:05 crc kubenswrapper[4767]: I1124 21:59:05.602649 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dh6cv"] Nov 24 21:59:05 crc kubenswrapper[4767]: I1124 21:59:05.602873 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" podUID="93f39202-b69a-4038-b366-58612af46372" containerName="dnsmasq-dns" containerID="cri-o://726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689" gracePeriod=10 Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.165146 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.166765 4767 generic.go:334] "Generic (PLEG): container finished" podID="93f39202-b69a-4038-b366-58612af46372" containerID="726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689" exitCode=0 Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.166827 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" event={"ID":"93f39202-b69a-4038-b366-58612af46372","Type":"ContainerDied","Data":"726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689"} Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.166855 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" event={"ID":"93f39202-b69a-4038-b366-58612af46372","Type":"ContainerDied","Data":"da7627da2c7d7efedb593b34cef9ef396347663f50c0b270e5eb7a2c70bbcd72"} Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.166873 4767 scope.go:117] "RemoveContainer" containerID="726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.196576 4767 scope.go:117] "RemoveContainer" containerID="e6064d94acbea7e3df4283686287827ab1e8e4d13841c42270e1af0ddc23c8f6" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.243826 4767 scope.go:117] "RemoveContainer" containerID="726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689" Nov 24 21:59:06 crc kubenswrapper[4767]: E1124 21:59:06.244922 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689\": container with ID starting with 726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689 not found: ID does not exist" containerID="726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.244961 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689"} err="failed to get container status \"726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689\": rpc error: code = NotFound desc = could not find container \"726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689\": container with ID starting with 726d8afa764e284af7d8f0e855597469b03b845d5648ed5b24e71944d2fd3689 not found: ID does not exist" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.244988 4767 scope.go:117] 
"RemoveContainer" containerID="e6064d94acbea7e3df4283686287827ab1e8e4d13841c42270e1af0ddc23c8f6" Nov 24 21:59:06 crc kubenswrapper[4767]: E1124 21:59:06.245312 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6064d94acbea7e3df4283686287827ab1e8e4d13841c42270e1af0ddc23c8f6\": container with ID starting with e6064d94acbea7e3df4283686287827ab1e8e4d13841c42270e1af0ddc23c8f6 not found: ID does not exist" containerID="e6064d94acbea7e3df4283686287827ab1e8e4d13841c42270e1af0ddc23c8f6" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.245343 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6064d94acbea7e3df4283686287827ab1e8e4d13841c42270e1af0ddc23c8f6"} err="failed to get container status \"e6064d94acbea7e3df4283686287827ab1e8e4d13841c42270e1af0ddc23c8f6\": rpc error: code = NotFound desc = could not find container \"e6064d94acbea7e3df4283686287827ab1e8e4d13841c42270e1af0ddc23c8f6\": container with ID starting with e6064d94acbea7e3df4283686287827ab1e8e4d13841c42270e1af0ddc23c8f6 not found: ID does not exist" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.321707 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-swift-storage-0\") pod \"93f39202-b69a-4038-b366-58612af46372\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.321971 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-config\") pod \"93f39202-b69a-4038-b366-58612af46372\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.322186 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww2wl\" (UniqueName: \"kubernetes.io/projected/93f39202-b69a-4038-b366-58612af46372-kube-api-access-ww2wl\") pod \"93f39202-b69a-4038-b366-58612af46372\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.322313 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-sb\") pod \"93f39202-b69a-4038-b366-58612af46372\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.322814 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-nb\") pod \"93f39202-b69a-4038-b366-58612af46372\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.323050 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-svc\") pod \"93f39202-b69a-4038-b366-58612af46372\" (UID: \"93f39202-b69a-4038-b366-58612af46372\") " Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.342231 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f39202-b69a-4038-b366-58612af46372-kube-api-access-ww2wl" (OuterVolumeSpecName: "kube-api-access-ww2wl") pod 
"93f39202-b69a-4038-b366-58612af46372" (UID: "93f39202-b69a-4038-b366-58612af46372"). InnerVolumeSpecName "kube-api-access-ww2wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.384420 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-config" (OuterVolumeSpecName: "config") pod "93f39202-b69a-4038-b366-58612af46372" (UID: "93f39202-b69a-4038-b366-58612af46372"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.389215 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "93f39202-b69a-4038-b366-58612af46372" (UID: "93f39202-b69a-4038-b366-58612af46372"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.396092 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93f39202-b69a-4038-b366-58612af46372" (UID: "93f39202-b69a-4038-b366-58612af46372"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.403307 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93f39202-b69a-4038-b366-58612af46372" (UID: "93f39202-b69a-4038-b366-58612af46372"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.413481 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93f39202-b69a-4038-b366-58612af46372" (UID: "93f39202-b69a-4038-b366-58612af46372"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.425180 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww2wl\" (UniqueName: \"kubernetes.io/projected/93f39202-b69a-4038-b366-58612af46372-kube-api-access-ww2wl\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.425493 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.425502 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.425512 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.425521 4767 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:06 crc kubenswrapper[4767]: I1124 21:59:06.425530 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f39202-b69a-4038-b366-58612af46372-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:07 crc kubenswrapper[4767]: I1124 21:59:07.180341 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dh6cv" Nov 24 21:59:07 crc kubenswrapper[4767]: I1124 21:59:07.221411 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dh6cv"] Nov 24 21:59:07 crc kubenswrapper[4767]: I1124 21:59:07.233969 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dh6cv"] Nov 24 21:59:08 crc kubenswrapper[4767]: I1124 21:59:08.328686 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f39202-b69a-4038-b366-58612af46372" path="/var/lib/kubelet/pods/93f39202-b69a-4038-b366-58612af46372/volumes" Nov 24 21:59:09 crc kubenswrapper[4767]: I1124 21:59:09.208676 4767 generic.go:334] "Generic (PLEG): container finished" podID="4490c175-4526-4747-a9f3-72d5a757cda9" containerID="4ce87428b6b914247bdb63237497193e1ee33b90a0c29370a2f8e98dd8342a21" exitCode=0 Nov 24 21:59:09 crc kubenswrapper[4767]: I1124 21:59:09.208756 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7rbl9" event={"ID":"4490c175-4526-4747-a9f3-72d5a757cda9","Type":"ContainerDied","Data":"4ce87428b6b914247bdb63237497193e1ee33b90a0c29370a2f8e98dd8342a21"} Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.622294 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.750351 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-scripts\") pod \"4490c175-4526-4747-a9f3-72d5a757cda9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.750816 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52hnw\" (UniqueName: \"kubernetes.io/projected/4490c175-4526-4747-a9f3-72d5a757cda9-kube-api-access-52hnw\") pod \"4490c175-4526-4747-a9f3-72d5a757cda9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.751348 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-config-data\") pod \"4490c175-4526-4747-a9f3-72d5a757cda9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.751543 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-combined-ca-bundle\") pod \"4490c175-4526-4747-a9f3-72d5a757cda9\" (UID: \"4490c175-4526-4747-a9f3-72d5a757cda9\") " Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.757968 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4490c175-4526-4747-a9f3-72d5a757cda9-kube-api-access-52hnw" (OuterVolumeSpecName: "kube-api-access-52hnw") pod "4490c175-4526-4747-a9f3-72d5a757cda9" (UID: "4490c175-4526-4747-a9f3-72d5a757cda9"). InnerVolumeSpecName "kube-api-access-52hnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.758139 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-scripts" (OuterVolumeSpecName: "scripts") pod "4490c175-4526-4747-a9f3-72d5a757cda9" (UID: "4490c175-4526-4747-a9f3-72d5a757cda9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.785932 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4490c175-4526-4747-a9f3-72d5a757cda9" (UID: "4490c175-4526-4747-a9f3-72d5a757cda9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.796669 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-config-data" (OuterVolumeSpecName: "config-data") pod "4490c175-4526-4747-a9f3-72d5a757cda9" (UID: "4490c175-4526-4747-a9f3-72d5a757cda9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.854093 4767 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.854141 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52hnw\" (UniqueName: \"kubernetes.io/projected/4490c175-4526-4747-a9f3-72d5a757cda9-kube-api-access-52hnw\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.854157 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:10 crc kubenswrapper[4767]: I1124 21:59:10.854172 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4490c175-4526-4747-a9f3-72d5a757cda9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.236496 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7rbl9" event={"ID":"4490c175-4526-4747-a9f3-72d5a757cda9","Type":"ContainerDied","Data":"df8b426fa38a96d1e6ab855312cb34237935117b55789d8d8cfddd818b017796"} Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.236552 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7rbl9" Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.236558 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df8b426fa38a96d1e6ab855312cb34237935117b55789d8d8cfddd818b017796" Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.413342 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.413937 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" containerName="nova-api-log" containerID="cri-o://24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928" gracePeriod=30 Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.414021 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" containerName="nova-api-api" containerID="cri-o://21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304" gracePeriod=30 Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.427737 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.428083 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="21796dd9-fa59-45d8-a276-b4e35f1fcaae" containerName="nova-scheduler-scheduler" containerID="cri-o://8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc" gracePeriod=30 Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.448049 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.448283 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" 
containerName="nova-metadata-log" containerID="cri-o://b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345" gracePeriod=30 Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.448403 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerName="nova-metadata-metadata" containerID="cri-o://ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a" gracePeriod=30 Nov 24 21:59:11 crc kubenswrapper[4767]: I1124 21:59:11.986843 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.083760 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-public-tls-certs\") pod \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.083927 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k52n\" (UniqueName: \"kubernetes.io/projected/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-kube-api-access-7k52n\") pod \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.083948 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-combined-ca-bundle\") pod \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.084039 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-internal-tls-certs\") pod \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.084083 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-config-data\") pod \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.084189 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-logs\") pod \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\" (UID: \"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e\") " Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.084469 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-logs" (OuterVolumeSpecName: "logs") pod "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" (UID: "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.084913 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.089009 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-kube-api-access-7k52n" (OuterVolumeSpecName: "kube-api-access-7k52n") pod "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" (UID: "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e"). InnerVolumeSpecName "kube-api-access-7k52n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.116689 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" (UID: "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.117948 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-config-data" (OuterVolumeSpecName: "config-data") pod "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" (UID: "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.141019 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" (UID: "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.149098 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" (UID: "2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.186196 4767 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.186420 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k52n\" (UniqueName: \"kubernetes.io/projected/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-kube-api-access-7k52n\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.186526 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.186600 4767 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.186659 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.261019 4767 generic.go:334] "Generic (PLEG): container finished" podID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerID="b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345" exitCode=143 Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.261100 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f22ba9ab-6fb1-42bf-afe8-80090a611d52","Type":"ContainerDied","Data":"b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345"} Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.263766 4767 generic.go:334] "Generic (PLEG): container finished" podID="2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" containerID="21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304" exitCode=0 Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.263803 4767 generic.go:334] "Generic (PLEG): container finished" podID="2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" containerID="24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928" exitCode=143 Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.263829 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e","Type":"ContainerDied","Data":"21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304"} Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.263874 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e","Type":"ContainerDied","Data":"24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928"} Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.263887 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e","Type":"ContainerDied","Data":"c83768aad3342e66210cc32de1e3bb5a48bf715febb2bbcc35d8165cc84effe6"} Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.263907 4767 scope.go:117] "RemoveContainer" containerID="21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304" Nov 24 
21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.264039 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.288673 4767 scope.go:117] "RemoveContainer" containerID="24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.326536 4767 scope.go:117] "RemoveContainer" containerID="21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304" Nov 24 21:59:12 crc kubenswrapper[4767]: E1124 21:59:12.327820 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304\": container with ID starting with 21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304 not found: ID does not exist" containerID="21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.327862 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304"} err="failed to get container status \"21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304\": rpc error: code = NotFound desc = could not find container \"21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304\": container with ID starting with 21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304 not found: ID does not exist" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.327890 4767 scope.go:117] "RemoveContainer" containerID="24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928" Nov 24 21:59:12 crc kubenswrapper[4767]: E1124 21:59:12.328247 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928\": container with ID starting with 24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928 not found: ID does not exist" containerID="24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.328309 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928"} err="failed to get container status \"24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928\": rpc error: code = NotFound desc = could not find container \"24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928\": container with ID starting with 24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928 not found: ID does not exist" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.328330 4767 scope.go:117] "RemoveContainer" containerID="21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.328955 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304"} err="failed to get container status \"21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304\": rpc error: code = NotFound desc = could not find container \"21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304\": container with ID starting with 
21ca611466c62e28f6aabcf657c64d043d495554be6ec121900311ce7b0cd304 not found: ID does not exist" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.328976 4767 scope.go:117] "RemoveContainer" containerID="24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.334556 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928"} err="failed to get container status \"24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928\": rpc error: code = NotFound desc = could not find container \"24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928\": container with ID starting with 24b57f0f8275dc2eb54cbafcd36b4b392fcf71d6977057f40169e407bc5fe928 not found: ID does not exist" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.338825 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.338860 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.341868 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:59:12 crc kubenswrapper[4767]: E1124 21:59:12.342417 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" containerName="nova-api-api" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.342433 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" containerName="nova-api-api" Nov 24 21:59:12 crc kubenswrapper[4767]: E1124 21:59:12.342452 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" containerName="nova-api-log" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.342459 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" containerName="nova-api-log" Nov 24 21:59:12 crc kubenswrapper[4767]: E1124 21:59:12.342476 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4490c175-4526-4747-a9f3-72d5a757cda9" containerName="nova-manage" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.342483 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4490c175-4526-4747-a9f3-72d5a757cda9" containerName="nova-manage" Nov 24 21:59:12 crc kubenswrapper[4767]: E1124 21:59:12.342501 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f39202-b69a-4038-b366-58612af46372" containerName="init" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.342508 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f39202-b69a-4038-b366-58612af46372" containerName="init" Nov 24 21:59:12 crc kubenswrapper[4767]: E1124 21:59:12.342531 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f39202-b69a-4038-b366-58612af46372" containerName="dnsmasq-dns" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.342539 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f39202-b69a-4038-b366-58612af46372" containerName="dnsmasq-dns" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.342805 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" containerName="nova-api-api" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.342820 4767 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="93f39202-b69a-4038-b366-58612af46372" containerName="dnsmasq-dns" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.342837 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" containerName="nova-api-log" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.342856 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="4490c175-4526-4747-a9f3-72d5a757cda9" containerName="nova-manage" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.344194 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.346550 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.347441 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.347957 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.355143 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.490681 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-public-tls-certs\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.490928 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7gr\" (UniqueName: \"kubernetes.io/projected/102497e1-cf13-4ed2-8976-ac528dbc6c82-kube-api-access-mw7gr\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.491066 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-config-data\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.491165 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102497e1-cf13-4ed2-8976-ac528dbc6c82-logs\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.491246 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-internal-tls-certs\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.491357 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc 
kubenswrapper[4767]: I1124 21:59:12.593093 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-public-tls-certs\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.593151 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw7gr\" (UniqueName: \"kubernetes.io/projected/102497e1-cf13-4ed2-8976-ac528dbc6c82-kube-api-access-mw7gr\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.593199 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-config-data\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.593244 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102497e1-cf13-4ed2-8976-ac528dbc6c82-logs\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.593270 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-internal-tls-certs\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.593326 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.593886 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102497e1-cf13-4ed2-8976-ac528dbc6c82-logs\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.597007 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.597039 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-public-tls-certs\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.597552 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-config-data\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.597822 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/102497e1-cf13-4ed2-8976-ac528dbc6c82-internal-tls-certs\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.612485 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw7gr\" (UniqueName: \"kubernetes.io/projected/102497e1-cf13-4ed2-8976-ac528dbc6c82-kube-api-access-mw7gr\") pod \"nova-api-0\" (UID: \"102497e1-cf13-4ed2-8976-ac528dbc6c82\") " pod="openstack/nova-api-0" Nov 24 21:59:12 crc kubenswrapper[4767]: I1124 21:59:12.668123 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:59:13 crc kubenswrapper[4767]: W1124 21:59:13.078954 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod102497e1_cf13_4ed2_8976_ac528dbc6c82.slice/crio-d0edb3f13f89907931d01549a679f5bf5b784ce9302642e622c92f5d8bb2a298 WatchSource:0}: Error finding container d0edb3f13f89907931d01549a679f5bf5b784ce9302642e622c92f5d8bb2a298: Status 404 returned error can't find the container with id d0edb3f13f89907931d01549a679f5bf5b784ce9302642e622c92f5d8bb2a298 Nov 24 21:59:13 crc kubenswrapper[4767]: I1124 21:59:13.080225 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:59:13 crc kubenswrapper[4767]: E1124 21:59:13.121054 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:59:13 crc kubenswrapper[4767]: E1124 21:59:13.122553 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:59:13 crc kubenswrapper[4767]: E1124 21:59:13.123982 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:59:13 crc kubenswrapper[4767]: E1124 21:59:13.124033 4767 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="21796dd9-fa59-45d8-a276-b4e35f1fcaae" containerName="nova-scheduler-scheduler" Nov 24 21:59:13 crc kubenswrapper[4767]: I1124 21:59:13.277515 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"102497e1-cf13-4ed2-8976-ac528dbc6c82","Type":"ContainerStarted","Data":"d0edb3f13f89907931d01549a679f5bf5b784ce9302642e622c92f5d8bb2a298"} Nov 24 21:59:14 crc kubenswrapper[4767]: I1124 21:59:14.291528 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"102497e1-cf13-4ed2-8976-ac528dbc6c82","Type":"ContainerStarted","Data":"105d79fdb6dd89e698a1921fd31cde4568fdde365519c95a0531610c8b7d0622"} Nov 24 21:59:14 crc kubenswrapper[4767]: I1124 21:59:14.291857 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"102497e1-cf13-4ed2-8976-ac528dbc6c82","Type":"ContainerStarted","Data":"8943f5aafe91e00a29aa254fa58f41bc76ef7ee686dfd80e7a5a619263c310a4"} Nov 24 21:59:14 crc kubenswrapper[4767]: I1124 21:59:14.318066 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.318048525 podStartE2EDuration="2.318048525s" podCreationTimestamp="2025-11-24 21:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:59:14.314092733 +0000 UTC m=+1237.231076125" watchObservedRunningTime="2025-11-24 21:59:14.318048525 +0000 UTC m=+1237.235031897" Nov 24 21:59:14 crc kubenswrapper[4767]: I1124 21:59:14.328439 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e" path="/var/lib/kubelet/pods/2d7ff23e-ef1b-4a83-b7b1-34355cee8f8e/volumes" Nov 24 21:59:14 crc kubenswrapper[4767]: I1124 21:59:14.578250 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:49676->10.217.0.216:8775: read: connection reset by peer" Nov 24 21:59:14 crc kubenswrapper[4767]: I1124 21:59:14.578733 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:49678->10.217.0.216:8775: read: connection reset by peer" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.020611 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.150603 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjh9v\" (UniqueName: \"kubernetes.io/projected/f22ba9ab-6fb1-42bf-afe8-80090a611d52-kube-api-access-gjh9v\") pod \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.150728 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-combined-ca-bundle\") pod \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.150795 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-config-data\") pod \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.150844 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-nova-metadata-tls-certs\") pod \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.150890 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f22ba9ab-6fb1-42bf-afe8-80090a611d52-logs\") pod \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\" (UID: \"f22ba9ab-6fb1-42bf-afe8-80090a611d52\") " Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.151524 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f22ba9ab-6fb1-42bf-afe8-80090a611d52-logs" (OuterVolumeSpecName: "logs") pod "f22ba9ab-6fb1-42bf-afe8-80090a611d52" (UID: "f22ba9ab-6fb1-42bf-afe8-80090a611d52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.161713 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22ba9ab-6fb1-42bf-afe8-80090a611d52-kube-api-access-gjh9v" (OuterVolumeSpecName: "kube-api-access-gjh9v") pod "f22ba9ab-6fb1-42bf-afe8-80090a611d52" (UID: "f22ba9ab-6fb1-42bf-afe8-80090a611d52"). InnerVolumeSpecName "kube-api-access-gjh9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.185997 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f22ba9ab-6fb1-42bf-afe8-80090a611d52" (UID: "f22ba9ab-6fb1-42bf-afe8-80090a611d52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.187512 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-config-data" (OuterVolumeSpecName: "config-data") pod "f22ba9ab-6fb1-42bf-afe8-80090a611d52" (UID: "f22ba9ab-6fb1-42bf-afe8-80090a611d52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.211736 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f22ba9ab-6fb1-42bf-afe8-80090a611d52" (UID: "f22ba9ab-6fb1-42bf-afe8-80090a611d52"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.253544 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.253934 4767 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.253947 4767 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f22ba9ab-6fb1-42bf-afe8-80090a611d52-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.253961 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjh9v\" (UniqueName: \"kubernetes.io/projected/f22ba9ab-6fb1-42bf-afe8-80090a611d52-kube-api-access-gjh9v\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.253972 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22ba9ab-6fb1-42bf-afe8-80090a611d52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.300860 4767 generic.go:334] "Generic (PLEG): container finished" podID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerID="ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a" exitCode=0 Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.300943 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f22ba9ab-6fb1-42bf-afe8-80090a611d52","Type":"ContainerDied","Data":"ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a"} Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.302018 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f22ba9ab-6fb1-42bf-afe8-80090a611d52","Type":"ContainerDied","Data":"c4cd020881e4855e91795dcc66f0ae06d2f78158f1f63aef0c8a0a08ad9b4cd0"} Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.302047 4767 scope.go:117] "RemoveContainer" containerID="ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.300989 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.324037 4767 scope.go:117] "RemoveContainer" containerID="b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.336740 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.346177 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.356499 4767 scope.go:117] "RemoveContainer" containerID="ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.356707 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:59:15 crc kubenswrapper[4767]: E1124 21:59:15.357014 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a\": container with ID starting with ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a not found: ID does not exist" containerID="ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.357050 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a"} err="failed to get container status \"ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a\": rpc error: code = NotFound desc = could not find container \"ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a\": container with ID starting with ceecd8c8da71c2175fa360bdbdc80c9638ac540081302d41f9453782f207931a not found: ID does not exist" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.357070 4767 scope.go:117] "RemoveContainer" containerID="b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345" Nov 24 21:59:15 crc kubenswrapper[4767]: E1124 21:59:15.357077 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerName="nova-metadata-log" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.357089 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerName="nova-metadata-log" Nov 24 21:59:15 crc kubenswrapper[4767]: E1124 21:59:15.357121 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerName="nova-metadata-metadata" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.357129 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerName="nova-metadata-metadata" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.357326 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerName="nova-metadata-log" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.357351 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" containerName="nova-metadata-metadata" Nov 24 21:59:15 crc kubenswrapper[4767]: E1124 21:59:15.357501 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345\": container with ID starting with b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345 not found: ID does not exist" containerID="b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.357560 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345"} err="failed to get container status \"b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345\": rpc error: code = NotFound desc = could not find container \"b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345\": container with ID starting with b2dc4d8fc17024604cfbab7ad8ae1f61edf35e83af06ed4d0ae44465bf508345 not found: ID does not exist" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.358354 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.364484 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.364576 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.379095 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.460246 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x89qk\" (UniqueName: \"kubernetes.io/projected/0af582ee-37f6-41fa-882e-a11eab5c4f29-kube-api-access-x89qk\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.460657 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af582ee-37f6-41fa-882e-a11eab5c4f29-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.460893 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0af582ee-37f6-41fa-882e-a11eab5c4f29-config-data\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.460986 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0af582ee-37f6-41fa-882e-a11eab5c4f29-logs\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.461124 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0af582ee-37f6-41fa-882e-a11eab5c4f29-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.563365 4767 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0af582ee-37f6-41fa-882e-a11eab5c4f29-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.563439 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x89qk\" (UniqueName: \"kubernetes.io/projected/0af582ee-37f6-41fa-882e-a11eab5c4f29-kube-api-access-x89qk\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.563531 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af582ee-37f6-41fa-882e-a11eab5c4f29-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.563603 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0af582ee-37f6-41fa-882e-a11eab5c4f29-config-data\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.563631 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0af582ee-37f6-41fa-882e-a11eab5c4f29-logs\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.564645 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0af582ee-37f6-41fa-882e-a11eab5c4f29-logs\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.567797 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0af582ee-37f6-41fa-882e-a11eab5c4f29-config-data\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.567905 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0af582ee-37f6-41fa-882e-a11eab5c4f29-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.569477 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af582ee-37f6-41fa-882e-a11eab5c4f29-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 21:59:15.580380 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x89qk\" (UniqueName: \"kubernetes.io/projected/0af582ee-37f6-41fa-882e-a11eab5c4f29-kube-api-access-x89qk\") pod \"nova-metadata-0\" (UID: \"0af582ee-37f6-41fa-882e-a11eab5c4f29\") " pod="openstack/nova-metadata-0" Nov 24 21:59:15 crc kubenswrapper[4767]: I1124 
21:59:15.676833 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:59:16 crc kubenswrapper[4767]: I1124 21:59:16.132625 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:59:16 crc kubenswrapper[4767]: W1124 21:59:16.135594 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0af582ee_37f6_41fa_882e_a11eab5c4f29.slice/crio-d186dfa9cde572c0d61fb4423682adb26eb0f3680581d5fefdf1189149ebf4c8 WatchSource:0}: Error finding container d186dfa9cde572c0d61fb4423682adb26eb0f3680581d5fefdf1189149ebf4c8: Status 404 returned error can't find the container with id d186dfa9cde572c0d61fb4423682adb26eb0f3680581d5fefdf1189149ebf4c8 Nov 24 21:59:16 crc kubenswrapper[4767]: I1124 21:59:16.334556 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f22ba9ab-6fb1-42bf-afe8-80090a611d52" path="/var/lib/kubelet/pods/f22ba9ab-6fb1-42bf-afe8-80090a611d52/volumes" Nov 24 21:59:16 crc kubenswrapper[4767]: I1124 21:59:16.336374 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0af582ee-37f6-41fa-882e-a11eab5c4f29","Type":"ContainerStarted","Data":"d186dfa9cde572c0d61fb4423682adb26eb0f3680581d5fefdf1189149ebf4c8"} Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.200635 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.294879 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzp95\" (UniqueName: \"kubernetes.io/projected/21796dd9-fa59-45d8-a276-b4e35f1fcaae-kube-api-access-zzp95\") pod \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.295046 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-combined-ca-bundle\") pod \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.295083 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-config-data\") pod \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\" (UID: \"21796dd9-fa59-45d8-a276-b4e35f1fcaae\") " Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.301351 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21796dd9-fa59-45d8-a276-b4e35f1fcaae-kube-api-access-zzp95" (OuterVolumeSpecName: "kube-api-access-zzp95") pod "21796dd9-fa59-45d8-a276-b4e35f1fcaae" (UID: "21796dd9-fa59-45d8-a276-b4e35f1fcaae"). InnerVolumeSpecName "kube-api-access-zzp95". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.324606 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-config-data" (OuterVolumeSpecName: "config-data") pod "21796dd9-fa59-45d8-a276-b4e35f1fcaae" (UID: "21796dd9-fa59-45d8-a276-b4e35f1fcaae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.327731 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21796dd9-fa59-45d8-a276-b4e35f1fcaae" (UID: "21796dd9-fa59-45d8-a276-b4e35f1fcaae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.329965 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0af582ee-37f6-41fa-882e-a11eab5c4f29","Type":"ContainerStarted","Data":"3a80cf3dce6b63bbdbf14e0891874f171c005022df005f5f1063ffdc6b189517"} Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.330017 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0af582ee-37f6-41fa-882e-a11eab5c4f29","Type":"ContainerStarted","Data":"fa2e1b7619ea83a3c82c2eabb6b2a4567eac75ed453472d170494c9a5749572b"} Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.332213 4767 generic.go:334] "Generic (PLEG): container finished" podID="21796dd9-fa59-45d8-a276-b4e35f1fcaae" containerID="8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc" exitCode=0 Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.332246 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.332301 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"21796dd9-fa59-45d8-a276-b4e35f1fcaae","Type":"ContainerDied","Data":"8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc"} Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.332351 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"21796dd9-fa59-45d8-a276-b4e35f1fcaae","Type":"ContainerDied","Data":"5ec7bd1c7c5fb45515794a7e71540361d1c440cbdeac7df19a25a6a43965efdf"} Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.332380 4767 scope.go:117] "RemoveContainer" containerID="8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.360324 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.360300961 podStartE2EDuration="2.360300961s" podCreationTimestamp="2025-11-24 21:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:59:17.348716193 +0000 UTC m=+1240.265699575" watchObservedRunningTime="2025-11-24 21:59:17.360300961 +0000 UTC m=+1240.277284333" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.382203 4767 scope.go:117] "RemoveContainer" containerID="8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc" Nov 24 21:59:17 crc kubenswrapper[4767]: E1124 21:59:17.382627 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc\": container with ID starting with 8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc not found: ID does not exist" containerID="8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc" Nov 24 21:59:17 crc 
kubenswrapper[4767]: I1124 21:59:17.382666 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc"} err="failed to get container status \"8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc\": rpc error: code = NotFound desc = could not find container \"8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc\": container with ID starting with 8efad4895bd0537a8690f9fc2a3e65835ec124e66ea178a82a9f10a00a3fa0bc not found: ID does not exist" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.391246 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.397889 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzp95\" (UniqueName: \"kubernetes.io/projected/21796dd9-fa59-45d8-a276-b4e35f1fcaae-kube-api-access-zzp95\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.397929 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.397939 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21796dd9-fa59-45d8-a276-b4e35f1fcaae-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.403465 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.414705 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:59:17 crc kubenswrapper[4767]: E1124 21:59:17.415081 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21796dd9-fa59-45d8-a276-b4e35f1fcaae" containerName="nova-scheduler-scheduler" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.415098 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="21796dd9-fa59-45d8-a276-b4e35f1fcaae" containerName="nova-scheduler-scheduler" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.415391 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="21796dd9-fa59-45d8-a276-b4e35f1fcaae" containerName="nova-scheduler-scheduler" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.416099 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.418953 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.425067 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.601328 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb63f2d-c413-4bb0-9c31-3c7871a80319-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1eb63f2d-c413-4bb0-9c31-3c7871a80319\") " pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.601393 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb63f2d-c413-4bb0-9c31-3c7871a80319-config-data\") pod \"nova-scheduler-0\" (UID: \"1eb63f2d-c413-4bb0-9c31-3c7871a80319\") " pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.601903 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrtjr\" (UniqueName: \"kubernetes.io/projected/1eb63f2d-c413-4bb0-9c31-3c7871a80319-kube-api-access-xrtjr\") pod \"nova-scheduler-0\" (UID: \"1eb63f2d-c413-4bb0-9c31-3c7871a80319\") " pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.703970 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrtjr\" (UniqueName: \"kubernetes.io/projected/1eb63f2d-c413-4bb0-9c31-3c7871a80319-kube-api-access-xrtjr\") pod \"nova-scheduler-0\" (UID: \"1eb63f2d-c413-4bb0-9c31-3c7871a80319\") " pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.704366 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb63f2d-c413-4bb0-9c31-3c7871a80319-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1eb63f2d-c413-4bb0-9c31-3c7871a80319\") " pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.704402 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb63f2d-c413-4bb0-9c31-3c7871a80319-config-data\") pod \"nova-scheduler-0\" (UID: \"1eb63f2d-c413-4bb0-9c31-3c7871a80319\") " pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.710217 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb63f2d-c413-4bb0-9c31-3c7871a80319-config-data\") pod \"nova-scheduler-0\" (UID: \"1eb63f2d-c413-4bb0-9c31-3c7871a80319\") " pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.710612 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb63f2d-c413-4bb0-9c31-3c7871a80319-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1eb63f2d-c413-4bb0-9c31-3c7871a80319\") " pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.720942 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrtjr\" (UniqueName: 
\"kubernetes.io/projected/1eb63f2d-c413-4bb0-9c31-3c7871a80319-kube-api-access-xrtjr\") pod \"nova-scheduler-0\" (UID: \"1eb63f2d-c413-4bb0-9c31-3c7871a80319\") " pod="openstack/nova-scheduler-0" Nov 24 21:59:17 crc kubenswrapper[4767]: I1124 21:59:17.731146 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:59:18 crc kubenswrapper[4767]: I1124 21:59:18.201381 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:59:18 crc kubenswrapper[4767]: I1124 21:59:18.328721 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21796dd9-fa59-45d8-a276-b4e35f1fcaae" path="/var/lib/kubelet/pods/21796dd9-fa59-45d8-a276-b4e35f1fcaae/volumes" Nov 24 21:59:18 crc kubenswrapper[4767]: I1124 21:59:18.351497 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1eb63f2d-c413-4bb0-9c31-3c7871a80319","Type":"ContainerStarted","Data":"ec4259ea1c9fc987e5cf7f2dd3066eb1e9b3cf5e7bc109bcb755fc3f3daee466"} Nov 24 21:59:19 crc kubenswrapper[4767]: I1124 21:59:19.364820 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1eb63f2d-c413-4bb0-9c31-3c7871a80319","Type":"ContainerStarted","Data":"dc4e3442e26f69fad9f38d4c3ba6d59f158df4fa366ee7dfb5f08d5aaa101089"} Nov 24 21:59:19 crc kubenswrapper[4767]: I1124 21:59:19.392987 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.392957289 podStartE2EDuration="2.392957289s" podCreationTimestamp="2025-11-24 21:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:59:19.381666559 +0000 UTC m=+1242.298649961" watchObservedRunningTime="2025-11-24 21:59:19.392957289 +0000 UTC m=+1242.309940671" Nov 24 21:59:20 crc kubenswrapper[4767]: I1124 21:59:20.678060 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:59:20 crc kubenswrapper[4767]: I1124 21:59:20.678570 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:59:22 crc kubenswrapper[4767]: I1124 21:59:22.668987 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:59:22 crc kubenswrapper[4767]: I1124 21:59:22.669333 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:59:22 crc kubenswrapper[4767]: I1124 21:59:22.731674 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 21:59:23 crc kubenswrapper[4767]: I1124 21:59:23.683563 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="102497e1-cf13-4ed2-8976-ac528dbc6c82" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:59:23 crc kubenswrapper[4767]: I1124 21:59:23.683598 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="102497e1-cf13-4ed2-8976-ac528dbc6c82" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:59:25 crc kubenswrapper[4767]: I1124 21:59:25.677616 4767 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:59:25 crc kubenswrapper[4767]: I1124 21:59:25.678130 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:59:26 crc kubenswrapper[4767]: I1124 21:59:26.689463 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0af582ee-37f6-41fa-882e-a11eab5c4f29" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:59:26 crc kubenswrapper[4767]: I1124 21:59:26.689486 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0af582ee-37f6-41fa-882e-a11eab5c4f29" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:59:27 crc kubenswrapper[4767]: I1124 21:59:27.732212 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 21:59:27 crc kubenswrapper[4767]: I1124 21:59:27.763381 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 21:59:28 crc kubenswrapper[4767]: I1124 21:59:28.521492 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 21:59:29 crc kubenswrapper[4767]: I1124 21:59:29.444884 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 21:59:32 crc kubenswrapper[4767]: I1124 21:59:32.676517 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:59:32 crc kubenswrapper[4767]: I1124 21:59:32.677515 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:59:32 crc kubenswrapper[4767]: I1124 21:59:32.679448 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:59:32 crc kubenswrapper[4767]: I1124 21:59:32.683399 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:59:33 crc kubenswrapper[4767]: I1124 21:59:33.561366 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:59:33 crc kubenswrapper[4767]: I1124 21:59:33.574062 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:59:35 crc kubenswrapper[4767]: I1124 21:59:35.481198 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:59:35 crc kubenswrapper[4767]: I1124 21:59:35.482197 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:59:35 crc kubenswrapper[4767]: I1124 21:59:35.683402 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-metadata-0" Nov 24 21:59:35 crc kubenswrapper[4767]: I1124 21:59:35.683871 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 21:59:35 crc kubenswrapper[4767]: I1124 21:59:35.689612 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:59:36 crc kubenswrapper[4767]: I1124 21:59:36.596531 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:59:44 crc kubenswrapper[4767]: I1124 21:59:44.592472 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:59:45 crc kubenswrapper[4767]: I1124 21:59:45.948935 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:59:49 crc kubenswrapper[4767]: I1124 21:59:49.147238 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="30d319c1-5268-413c-a6db-9d376a2217c3" containerName="rabbitmq" containerID="cri-o://406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564" gracePeriod=604796 Nov 24 21:59:49 crc kubenswrapper[4767]: I1124 21:59:49.878990 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="30d319c1-5268-413c-a6db-9d376a2217c3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Nov 24 21:59:50 crc kubenswrapper[4767]: I1124 21:59:50.052214 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="5c433e97-140e-43fe-aa7b-1bd14d9e78b9" containerName="rabbitmq" containerID="cri-o://536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b" gracePeriod=604796 Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.761659 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.798948 4767 generic.go:334] "Generic (PLEG): container finished" podID="30d319c1-5268-413c-a6db-9d376a2217c3" containerID="406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564" exitCode=0 Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.798992 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"30d319c1-5268-413c-a6db-9d376a2217c3","Type":"ContainerDied","Data":"406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564"} Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.799018 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"30d319c1-5268-413c-a6db-9d376a2217c3","Type":"ContainerDied","Data":"2c53137b58038ccef7db7ddc96408373ddde24b62180630098a4d43a1853501f"} Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.799035 4767 scope.go:117] "RemoveContainer" containerID="406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.799059 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.831823 4767 scope.go:117] "RemoveContainer" containerID="b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.882252 4767 scope.go:117] "RemoveContainer" containerID="406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.885180 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-confd\") pod \"30d319c1-5268-413c-a6db-9d376a2217c3\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.885267 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-plugins\") pod \"30d319c1-5268-413c-a6db-9d376a2217c3\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.885319 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-config-data\") pod \"30d319c1-5268-413c-a6db-9d376a2217c3\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.885346 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30d319c1-5268-413c-a6db-9d376a2217c3-pod-info\") pod \"30d319c1-5268-413c-a6db-9d376a2217c3\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.885379 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-erlang-cookie\") pod \"30d319c1-5268-413c-a6db-9d376a2217c3\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.885397 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"30d319c1-5268-413c-a6db-9d376a2217c3\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.885414 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-plugins-conf\") pod \"30d319c1-5268-413c-a6db-9d376a2217c3\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.885510 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30d319c1-5268-413c-a6db-9d376a2217c3-erlang-cookie-secret\") pod \"30d319c1-5268-413c-a6db-9d376a2217c3\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.885535 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-server-conf\") pod \"30d319c1-5268-413c-a6db-9d376a2217c3\" (UID: 
\"30d319c1-5268-413c-a6db-9d376a2217c3\") " Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.885554 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-tls\") pod \"30d319c1-5268-413c-a6db-9d376a2217c3\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.885657 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h55b9\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-kube-api-access-h55b9\") pod \"30d319c1-5268-413c-a6db-9d376a2217c3\" (UID: \"30d319c1-5268-413c-a6db-9d376a2217c3\") " Nov 24 21:59:55 crc kubenswrapper[4767]: E1124 21:59:55.886578 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564\": container with ID starting with 406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564 not found: ID does not exist" containerID="406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.886624 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564"} err="failed to get container status \"406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564\": rpc error: code = NotFound desc = could not find container \"406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564\": container with ID starting with 406558e0ce2599ea5ba285112328f5871649ee6cf4163ceac9cd26a61d351564 not found: ID does not exist" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.886656 4767 scope.go:117] "RemoveContainer" containerID="b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.888120 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "30d319c1-5268-413c-a6db-9d376a2217c3" (UID: "30d319c1-5268-413c-a6db-9d376a2217c3"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:59:55 crc kubenswrapper[4767]: E1124 21:59:55.888222 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948\": container with ID starting with b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948 not found: ID does not exist" containerID="b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.888257 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948"} err="failed to get container status \"b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948\": rpc error: code = NotFound desc = could not find container \"b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948\": container with ID starting with b7d70d61b8a6d05ff0358fcc918c04e12a75f5db49942630664c51e6bfc49948 not found: ID does not exist" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.888668 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "30d319c1-5268-413c-a6db-9d376a2217c3" (UID: "30d319c1-5268-413c-a6db-9d376a2217c3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.897409 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/30d319c1-5268-413c-a6db-9d376a2217c3-pod-info" (OuterVolumeSpecName: "pod-info") pod "30d319c1-5268-413c-a6db-9d376a2217c3" (UID: "30d319c1-5268-413c-a6db-9d376a2217c3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.897646 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "30d319c1-5268-413c-a6db-9d376a2217c3" (UID: "30d319c1-5268-413c-a6db-9d376a2217c3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.898787 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-kube-api-access-h55b9" (OuterVolumeSpecName: "kube-api-access-h55b9") pod "30d319c1-5268-413c-a6db-9d376a2217c3" (UID: "30d319c1-5268-413c-a6db-9d376a2217c3"). InnerVolumeSpecName "kube-api-access-h55b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.898910 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "30d319c1-5268-413c-a6db-9d376a2217c3" (UID: "30d319c1-5268-413c-a6db-9d376a2217c3"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.901722 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d319c1-5268-413c-a6db-9d376a2217c3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "30d319c1-5268-413c-a6db-9d376a2217c3" (UID: "30d319c1-5268-413c-a6db-9d376a2217c3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.902672 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "30d319c1-5268-413c-a6db-9d376a2217c3" (UID: "30d319c1-5268-413c-a6db-9d376a2217c3"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.925253 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-config-data" (OuterVolumeSpecName: "config-data") pod "30d319c1-5268-413c-a6db-9d376a2217c3" (UID: "30d319c1-5268-413c-a6db-9d376a2217c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.956483 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-server-conf" (OuterVolumeSpecName: "server-conf") pod "30d319c1-5268-413c-a6db-9d376a2217c3" (UID: "30d319c1-5268-413c-a6db-9d376a2217c3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.989248 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h55b9\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-kube-api-access-h55b9\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.989290 4767 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.989301 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.989310 4767 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30d319c1-5268-413c-a6db-9d376a2217c3-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.989318 4767 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.989340 4767 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.989349 4767 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.989357 4767 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30d319c1-5268-413c-a6db-9d376a2217c3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.989365 4767 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30d319c1-5268-413c-a6db-9d376a2217c3-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:55 crc kubenswrapper[4767]: I1124 21:59:55.989373 4767 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.009554 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "30d319c1-5268-413c-a6db-9d376a2217c3" (UID: "30d319c1-5268-413c-a6db-9d376a2217c3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.010019 4767 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.090831 4767 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30d319c1-5268-413c-a6db-9d376a2217c3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.090857 4767 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.128896 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.138178 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.167491 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:59:56 crc kubenswrapper[4767]: E1124 21:59:56.168038 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d319c1-5268-413c-a6db-9d376a2217c3" containerName="setup-container" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.168061 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d319c1-5268-413c-a6db-9d376a2217c3" containerName="setup-container" Nov 24 21:59:56 crc kubenswrapper[4767]: E1124 21:59:56.168082 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d319c1-5268-413c-a6db-9d376a2217c3" containerName="rabbitmq" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.168090 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d319c1-5268-413c-a6db-9d376a2217c3" containerName="rabbitmq" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.168362 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d319c1-5268-413c-a6db-9d376a2217c3" containerName="rabbitmq" Nov 24 
21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.169708 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.173305 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.173831 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.173996 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.174146 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.176744 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vm78g" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.177365 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.179551 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.183984 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.293565 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.293612 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d5nb\" (UniqueName: \"kubernetes.io/projected/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-kube-api-access-5d5nb\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.293823 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.293906 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.293947 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.294093 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.294121 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-config-data\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.294228 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.294256 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.294312 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.294400 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.327180 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d319c1-5268-413c-a6db-9d376a2217c3" path="/var/lib/kubelet/pods/30d319c1-5268-413c-a6db-9d376a2217c3/volumes" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.395947 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.396008 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.396042 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" 
Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.396079 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.396098 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-config-data\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.396155 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.396174 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.396191 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.396233 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.396267 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.396298 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d5nb\" (UniqueName: \"kubernetes.io/projected/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-kube-api-access-5d5nb\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.397654 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.398057 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.398230 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.399178 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.399718 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-config-data\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.399823 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.404190 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.406310 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.409731 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.412056 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.416728 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d5nb\" (UniqueName: \"kubernetes.io/projected/6e04e8f5-1d91-474f-b67b-d8fa24e00b90-kube-api-access-5d5nb\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.429015 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6e04e8f5-1d91-474f-b67b-d8fa24e00b90\") " pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.492454 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.636728 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.703561 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-plugins\") pod \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.703620 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-plugins-conf\") pod \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.703652 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-pod-info\") pod \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.703677 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-erlang-cookie\") pod \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.703709 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-erlang-cookie-secret\") pod \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.703806 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn4x2\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-kube-api-access-dn4x2\") pod \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.703829 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-confd\") pod \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.703856 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.703920 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-server-conf\") pod \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.712003 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-kube-api-access-dn4x2" (OuterVolumeSpecName: "kube-api-access-dn4x2") pod "5c433e97-140e-43fe-aa7b-1bd14d9e78b9" (UID: "5c433e97-140e-43fe-aa7b-1bd14d9e78b9"). InnerVolumeSpecName "kube-api-access-dn4x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.714318 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-tls\") pod \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.714415 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-config-data\") pod \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\" (UID: \"5c433e97-140e-43fe-aa7b-1bd14d9e78b9\") " Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.716183 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5c433e97-140e-43fe-aa7b-1bd14d9e78b9" (UID: "5c433e97-140e-43fe-aa7b-1bd14d9e78b9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.717359 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5c433e97-140e-43fe-aa7b-1bd14d9e78b9" (UID: "5c433e97-140e-43fe-aa7b-1bd14d9e78b9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.717523 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5c433e97-140e-43fe-aa7b-1bd14d9e78b9" (UID: "5c433e97-140e-43fe-aa7b-1bd14d9e78b9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.717644 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-pod-info" (OuterVolumeSpecName: "pod-info") pod "5c433e97-140e-43fe-aa7b-1bd14d9e78b9" (UID: "5c433e97-140e-43fe-aa7b-1bd14d9e78b9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.718094 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5c433e97-140e-43fe-aa7b-1bd14d9e78b9" (UID: "5c433e97-140e-43fe-aa7b-1bd14d9e78b9"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.718584 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn4x2\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-kube-api-access-dn4x2\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.719762 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "5c433e97-140e-43fe-aa7b-1bd14d9e78b9" (UID: "5c433e97-140e-43fe-aa7b-1bd14d9e78b9"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.721915 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5c433e97-140e-43fe-aa7b-1bd14d9e78b9" (UID: "5c433e97-140e-43fe-aa7b-1bd14d9e78b9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:56 crc kubenswrapper[4767]: I1124 21:59:56.750748 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-config-data" (OuterVolumeSpecName: "config-data") pod "5c433e97-140e-43fe-aa7b-1bd14d9e78b9" (UID: "5c433e97-140e-43fe-aa7b-1bd14d9e78b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.781176 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-server-conf" (OuterVolumeSpecName: "server-conf") pod "5c433e97-140e-43fe-aa7b-1bd14d9e78b9" (UID: "5c433e97-140e-43fe-aa7b-1bd14d9e78b9"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.823236 4767 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.823329 4767 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.824949 4767 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.824976 4767 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.824999 4767 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.825037 4767 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.825058 4767 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.825070 4767 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.825080 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.872209 4767 generic.go:334] "Generic (PLEG): container finished" podID="5c433e97-140e-43fe-aa7b-1bd14d9e78b9" containerID="536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b" exitCode=0 Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.872300 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5c433e97-140e-43fe-aa7b-1bd14d9e78b9","Type":"ContainerDied","Data":"536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b"} Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.872331 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5c433e97-140e-43fe-aa7b-1bd14d9e78b9","Type":"ContainerDied","Data":"77b1ca3b5a49c3d4c8e416f578aeac14ca4839406d1ca3a3b811652114234ab3"} Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.872353 4767 scope.go:117] "RemoveContainer" containerID="536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 
21:59:56.872491 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.873898 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5c433e97-140e-43fe-aa7b-1bd14d9e78b9" (UID: "5c433e97-140e-43fe-aa7b-1bd14d9e78b9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.882127 4767 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.897755 4767 scope.go:117] "RemoveContainer" containerID="f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.925350 4767 scope.go:117] "RemoveContainer" containerID="536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b" Nov 24 21:59:57 crc kubenswrapper[4767]: E1124 21:59:56.926410 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b\": container with ID starting with 536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b not found: ID does not exist" containerID="536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.926471 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b"} err="failed to get container status \"536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b\": rpc error: code = NotFound desc = could not find container \"536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b\": container with ID starting with 536c39ba023e6a1d710e3cb08f580281eab7ca64525ab0ed6d87bda1fbcbe81b not found: ID does not exist" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.926505 4767 scope.go:117] "RemoveContainer" containerID="f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21" Nov 24 21:59:57 crc kubenswrapper[4767]: E1124 21:59:56.927236 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21\": container with ID starting with f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21 not found: ID does not exist" containerID="f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.927278 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21"} err="failed to get container status \"f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21\": rpc error: code = NotFound desc = could not find container \"f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21\": container with ID starting with f5aa46d30038140911faf8088e250fecbf9b1d53f953ccea13d691e7a8ff1a21 not found: ID does not exist" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.928303 4767 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5c433e97-140e-43fe-aa7b-1bd14d9e78b9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.928321 4767 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:57 crc kubenswrapper[4767]: W1124 21:59:56.975159 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e04e8f5_1d91_474f_b67b_d8fa24e00b90.slice/crio-2bae22fd5d8854763fef600365f3a420280dbc8b7b5cc1b5201eb4f72365d63f WatchSource:0}: Error finding container 2bae22fd5d8854763fef600365f3a420280dbc8b7b5cc1b5201eb4f72365d63f: Status 404 returned error can't find the container with id 2bae22fd5d8854763fef600365f3a420280dbc8b7b5cc1b5201eb4f72365d63f Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:56.982968 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.218657 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.227590 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.240918 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:59:57 crc kubenswrapper[4767]: E1124 21:59:57.241544 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c433e97-140e-43fe-aa7b-1bd14d9e78b9" containerName="rabbitmq" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.241565 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c433e97-140e-43fe-aa7b-1bd14d9e78b9" containerName="rabbitmq" Nov 24 21:59:57 crc kubenswrapper[4767]: E1124 21:59:57.241619 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c433e97-140e-43fe-aa7b-1bd14d9e78b9" containerName="setup-container" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.241630 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c433e97-140e-43fe-aa7b-1bd14d9e78b9" containerName="setup-container" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.241899 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c433e97-140e-43fe-aa7b-1bd14d9e78b9" containerName="rabbitmq" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.243364 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.245214 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.246480 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.246617 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.247282 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.247396 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.247457 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-sr5cr" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.247583 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.265126 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.346000 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.346137 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwmvb\" (UniqueName: \"kubernetes.io/projected/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-kube-api-access-kwmvb\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.346169 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.346228 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.346366 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.346717 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.346777 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.346815 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.346867 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.346899 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.346932 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.449286 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.449331 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwmvb\" (UniqueName: \"kubernetes.io/projected/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-kube-api-access-kwmvb\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.449360 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.449409 4767 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.449455 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.449498 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.449544 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.449593 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.449629 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.449658 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.449727 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.450866 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.451291 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.451747 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.451846 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.453802 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.453831 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.458850 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.459780 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.459886 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.471451 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.472014 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwmvb\" (UniqueName: \"kubernetes.io/projected/54f86c38-24f7-427b-9b8c-4f4505f7fa1d-kube-api-access-kwmvb\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.503856 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"54f86c38-24f7-427b-9b8c-4f4505f7fa1d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.575654 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:59:57 crc kubenswrapper[4767]: I1124 21:59:57.889887 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6e04e8f5-1d91-474f-b67b-d8fa24e00b90","Type":"ContainerStarted","Data":"2bae22fd5d8854763fef600365f3a420280dbc8b7b5cc1b5201eb4f72365d63f"} Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.017712 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:59:58 crc kubenswrapper[4767]: W1124 21:59:58.025035 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54f86c38_24f7_427b_9b8c_4f4505f7fa1d.slice/crio-97d64cfad314d40ebe4268112e76944658facc928a35437e18ce94f5c59953e3 WatchSource:0}: Error finding container 97d64cfad314d40ebe4268112e76944658facc928a35437e18ce94f5c59953e3: Status 404 returned error can't find the container with id 97d64cfad314d40ebe4268112e76944658facc928a35437e18ce94f5c59953e3 Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.328513 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c433e97-140e-43fe-aa7b-1bd14d9e78b9" path="/var/lib/kubelet/pods/5c433e97-140e-43fe-aa7b-1bd14d9e78b9/volumes" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.781196 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4c225"] Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.783175 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.785329 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.814837 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4c225"] Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.879874 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.880005 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmgrz\" (UniqueName: \"kubernetes.io/projected/26a0604f-ee55-4bfd-9b22-728392d9d854-kube-api-access-pmgrz\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.880055 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.880094 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.880113 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.880322 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-config\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.880555 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.900512 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"54f86c38-24f7-427b-9b8c-4f4505f7fa1d","Type":"ContainerStarted","Data":"97d64cfad314d40ebe4268112e76944658facc928a35437e18ce94f5c59953e3"} Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.902172 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6e04e8f5-1d91-474f-b67b-d8fa24e00b90","Type":"ContainerStarted","Data":"b5f04af9cb9fde5c4f9caf8a125e64f1251df618bbc18d042b6045e5a7dd7929"} Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.982576 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.982622 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.982664 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-config\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.982747 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.982785 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.982860 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmgrz\" (UniqueName: \"kubernetes.io/projected/26a0604f-ee55-4bfd-9b22-728392d9d854-kube-api-access-pmgrz\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.982905 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.983610 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.984162 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.984611 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.985017 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-config\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.985204 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:58 crc kubenswrapper[4767]: I1124 21:59:58.985474 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:59 crc kubenswrapper[4767]: I1124 21:59:59.009602 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmgrz\" (UniqueName: \"kubernetes.io/projected/26a0604f-ee55-4bfd-9b22-728392d9d854-kube-api-access-pmgrz\") pod \"dnsmasq-dns-79bd4cc8c9-4c225\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:59 crc kubenswrapper[4767]: I1124 21:59:59.110023 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 21:59:59 crc kubenswrapper[4767]: I1124 21:59:59.559818 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4c225"] Nov 24 21:59:59 crc kubenswrapper[4767]: W1124 21:59:59.801664 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26a0604f_ee55_4bfd_9b22_728392d9d854.slice/crio-fee458228a6f541e452634d080a51eb08904987af844dee43f863259374d82a4 WatchSource:0}: Error finding container fee458228a6f541e452634d080a51eb08904987af844dee43f863259374d82a4: Status 404 returned error can't find the container with id fee458228a6f541e452634d080a51eb08904987af844dee43f863259374d82a4 Nov 24 21:59:59 crc kubenswrapper[4767]: I1124 21:59:59.916457 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" event={"ID":"26a0604f-ee55-4bfd-9b22-728392d9d854","Type":"ContainerStarted","Data":"fee458228a6f541e452634d080a51eb08904987af844dee43f863259374d82a4"} Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.129381 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78"] Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.130921 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.133258 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.135059 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.144406 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78"] Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.209903 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-config-volume\") pod \"collect-profiles-29400360-nhn78\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.209957 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-secret-volume\") pod \"collect-profiles-29400360-nhn78\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.210157 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgbd9\" (UniqueName: \"kubernetes.io/projected/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-kube-api-access-dgbd9\") pod \"collect-profiles-29400360-nhn78\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.312528 4767 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-dgbd9\" (UniqueName: \"kubernetes.io/projected/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-kube-api-access-dgbd9\") pod \"collect-profiles-29400360-nhn78\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.312663 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-config-volume\") pod \"collect-profiles-29400360-nhn78\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.312687 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-secret-volume\") pod \"collect-profiles-29400360-nhn78\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.313574 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-config-volume\") pod \"collect-profiles-29400360-nhn78\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.317692 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-secret-volume\") pod \"collect-profiles-29400360-nhn78\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.329751 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgbd9\" (UniqueName: \"kubernetes.io/projected/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-kube-api-access-dgbd9\") pod \"collect-profiles-29400360-nhn78\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.478176 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:00 crc kubenswrapper[4767]: W1124 22:00:00.925878 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb3e6da3_cf34_4cd1_ab99_c5d4eb025c96.slice/crio-79227605842c6e4f868b25e7b191bf827038573deff682be0f9ff6b4fda9dd49 WatchSource:0}: Error finding container 79227605842c6e4f868b25e7b191bf827038573deff682be0f9ff6b4fda9dd49: Status 404 returned error can't find the container with id 79227605842c6e4f868b25e7b191bf827038573deff682be0f9ff6b4fda9dd49 Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.927593 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78"] Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.930527 4767 generic.go:334] "Generic (PLEG): container finished" podID="26a0604f-ee55-4bfd-9b22-728392d9d854" containerID="c6077eefe513932d582450265152f3c67081179ac7058f906df212ff71de5323" exitCode=0 Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.930580 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" event={"ID":"26a0604f-ee55-4bfd-9b22-728392d9d854","Type":"ContainerDied","Data":"c6077eefe513932d582450265152f3c67081179ac7058f906df212ff71de5323"} Nov 24 22:00:00 crc kubenswrapper[4767]: I1124 22:00:00.932712 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"54f86c38-24f7-427b-9b8c-4f4505f7fa1d","Type":"ContainerStarted","Data":"568bf94f48a3fdcc73874f9e051c11270b46db9436aae8c8c02b6f58c475c8e7"} Nov 24 22:00:01 crc kubenswrapper[4767]: I1124 22:00:01.949691 4767 generic.go:334] "Generic (PLEG): container finished" podID="bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96" containerID="6f9cc36295d186a8cb966db9448f2738ec51701eb1944b987ed77f8b282cd72c" exitCode=0 Nov 24 22:00:01 crc kubenswrapper[4767]: I1124 22:00:01.949749 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" event={"ID":"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96","Type":"ContainerDied","Data":"6f9cc36295d186a8cb966db9448f2738ec51701eb1944b987ed77f8b282cd72c"} Nov 24 22:00:01 crc kubenswrapper[4767]: I1124 22:00:01.950080 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" event={"ID":"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96","Type":"ContainerStarted","Data":"79227605842c6e4f868b25e7b191bf827038573deff682be0f9ff6b4fda9dd49"} Nov 24 22:00:01 crc kubenswrapper[4767]: I1124 22:00:01.952420 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" event={"ID":"26a0604f-ee55-4bfd-9b22-728392d9d854","Type":"ContainerStarted","Data":"8e867542fe555a1f1945719bc235e8831bf3a6cf4cdd520b509a373473312910"} Nov 24 22:00:01 crc kubenswrapper[4767]: I1124 22:00:01.952520 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 22:00:01 crc kubenswrapper[4767]: I1124 22:00:01.988580 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" podStartSLOduration=3.988564697 podStartE2EDuration="3.988564697s" podCreationTimestamp="2025-11-24 21:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 22:00:01.982619909 +0000 UTC m=+1284.899603291" watchObservedRunningTime="2025-11-24 22:00:01.988564697 +0000 UTC m=+1284.905548069" Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.341357 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.379987 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-secret-volume\") pod \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.380238 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgbd9\" (UniqueName: \"kubernetes.io/projected/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-kube-api-access-dgbd9\") pod \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.380407 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-config-volume\") pod \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\" (UID: \"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96\") " Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.381063 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-config-volume" (OuterVolumeSpecName: "config-volume") pod "bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96" (UID: "bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.386747 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96" (UID: "bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.387023 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-kube-api-access-dgbd9" (OuterVolumeSpecName: "kube-api-access-dgbd9") pod "bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96" (UID: "bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96"). InnerVolumeSpecName "kube-api-access-dgbd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.483166 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.483221 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgbd9\" (UniqueName: \"kubernetes.io/projected/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-kube-api-access-dgbd9\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.483231 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.979724 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" event={"ID":"bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96","Type":"ContainerDied","Data":"79227605842c6e4f868b25e7b191bf827038573deff682be0f9ff6b4fda9dd49"} Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.979788 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79227605842c6e4f868b25e7b191bf827038573deff682be0f9ff6b4fda9dd49" Nov 24 22:00:03 crc kubenswrapper[4767]: I1124 22:00:03.979852 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78" Nov 24 22:00:05 crc kubenswrapper[4767]: I1124 22:00:05.482192 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:00:05 crc kubenswrapper[4767]: I1124 22:00:05.482758 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.111713 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.185628 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xhjdp"] Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.185844 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" podUID="d66e31b6-987b-4d4f-a897-14bce551de92" containerName="dnsmasq-dns" containerID="cri-o://25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332" gracePeriod=10 Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.318572 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d68fbfdc-ssw6j"] Nov 24 22:00:09 crc kubenswrapper[4767]: E1124 22:00:09.319189 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96" containerName="collect-profiles" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.319730 4767 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96" containerName="collect-profiles" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.319963 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96" containerName="collect-profiles" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.321733 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.324098 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d68fbfdc-ssw6j"] Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.423476 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-ovsdbserver-nb\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.423917 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-dns-svc\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.423996 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-dns-swift-storage-0\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.424107 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdmkz\" (UniqueName: \"kubernetes.io/projected/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-kube-api-access-gdmkz\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.424296 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-openstack-edpm-ipam\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.425370 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-ovsdbserver-sb\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.425464 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-config\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.527345 4767 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-ovsdbserver-sb\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.527461 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-config\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.527496 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-ovsdbserver-nb\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.527562 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-dns-svc\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.527591 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-dns-swift-storage-0\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.527694 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdmkz\" (UniqueName: \"kubernetes.io/projected/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-kube-api-access-gdmkz\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.528103 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-ovsdbserver-sb\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.528346 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-ovsdbserver-nb\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.528359 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-config\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.528514 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-dns-swift-storage-0\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.528754 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-openstack-edpm-ipam\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.528994 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-dns-svc\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.529664 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-openstack-edpm-ipam\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.550807 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdmkz\" (UniqueName: \"kubernetes.io/projected/7c337669-c5dd-4162-a7cb-a38a0cd86dbe-kube-api-access-gdmkz\") pod \"dnsmasq-dns-d68fbfdc-ssw6j\" (UID: \"7c337669-c5dd-4162-a7cb-a38a0cd86dbe\") " pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.678703 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.678951 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.731881 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chrbt\" (UniqueName: \"kubernetes.io/projected/d66e31b6-987b-4d4f-a897-14bce551de92-kube-api-access-chrbt\") pod \"d66e31b6-987b-4d4f-a897-14bce551de92\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.731983 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-nb\") pod \"d66e31b6-987b-4d4f-a897-14bce551de92\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.732030 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-swift-storage-0\") pod \"d66e31b6-987b-4d4f-a897-14bce551de92\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.732072 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-svc\") pod \"d66e31b6-987b-4d4f-a897-14bce551de92\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.732091 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-sb\") pod \"d66e31b6-987b-4d4f-a897-14bce551de92\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.732246 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-config\") pod \"d66e31b6-987b-4d4f-a897-14bce551de92\" (UID: \"d66e31b6-987b-4d4f-a897-14bce551de92\") " Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.738419 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66e31b6-987b-4d4f-a897-14bce551de92-kube-api-access-chrbt" (OuterVolumeSpecName: "kube-api-access-chrbt") pod "d66e31b6-987b-4d4f-a897-14bce551de92" (UID: "d66e31b6-987b-4d4f-a897-14bce551de92"). InnerVolumeSpecName "kube-api-access-chrbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.788812 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-config" (OuterVolumeSpecName: "config") pod "d66e31b6-987b-4d4f-a897-14bce551de92" (UID: "d66e31b6-987b-4d4f-a897-14bce551de92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.801830 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d66e31b6-987b-4d4f-a897-14bce551de92" (UID: "d66e31b6-987b-4d4f-a897-14bce551de92"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.803109 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d66e31b6-987b-4d4f-a897-14bce551de92" (UID: "d66e31b6-987b-4d4f-a897-14bce551de92"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.828111 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d66e31b6-987b-4d4f-a897-14bce551de92" (UID: "d66e31b6-987b-4d4f-a897-14bce551de92"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.828962 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d66e31b6-987b-4d4f-a897-14bce551de92" (UID: "d66e31b6-987b-4d4f-a897-14bce551de92"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.835029 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-config\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.835059 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chrbt\" (UniqueName: \"kubernetes.io/projected/d66e31b6-987b-4d4f-a897-14bce551de92-kube-api-access-chrbt\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.835072 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.835086 4767 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.835100 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:09 crc kubenswrapper[4767]: I1124 22:00:09.835113 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66e31b6-987b-4d4f-a897-14bce551de92-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.051618 4767 generic.go:334] "Generic (PLEG): container finished" podID="d66e31b6-987b-4d4f-a897-14bce551de92" containerID="25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332" exitCode=0 Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.051659 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" event={"ID":"d66e31b6-987b-4d4f-a897-14bce551de92","Type":"ContainerDied","Data":"25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332"} Nov 24 22:00:10 crc 
kubenswrapper[4767]: I1124 22:00:10.051686 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" event={"ID":"d66e31b6-987b-4d4f-a897-14bce551de92","Type":"ContainerDied","Data":"92f7ee6a659791a2c43886f743973df1b40a79e4dfd9afeff7b03edd3aaeedcc"} Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.051703 4767 scope.go:117] "RemoveContainer" containerID="25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332" Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.051732 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xhjdp" Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.081667 4767 scope.go:117] "RemoveContainer" containerID="78926c37b6cba3ee0dd154164df274a22dfe0dce9e6b77e3411b33f6ccb1566e" Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.083770 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xhjdp"] Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.093248 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xhjdp"] Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.105430 4767 scope.go:117] "RemoveContainer" containerID="25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332" Nov 24 22:00:10 crc kubenswrapper[4767]: E1124 22:00:10.106031 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332\": container with ID starting with 25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332 not found: ID does not exist" containerID="25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332" Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.106075 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332"} err="failed to get container status \"25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332\": rpc error: code = NotFound desc = could not find container \"25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332\": container with ID starting with 25cd0e98c20c28484a26e72cf7aaa75b4d3a26a9b2f2000a99e43013e4c0a332 not found: ID does not exist" Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.106106 4767 scope.go:117] "RemoveContainer" containerID="78926c37b6cba3ee0dd154164df274a22dfe0dce9e6b77e3411b33f6ccb1566e" Nov 24 22:00:10 crc kubenswrapper[4767]: E1124 22:00:10.106493 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78926c37b6cba3ee0dd154164df274a22dfe0dce9e6b77e3411b33f6ccb1566e\": container with ID starting with 78926c37b6cba3ee0dd154164df274a22dfe0dce9e6b77e3411b33f6ccb1566e not found: ID does not exist" containerID="78926c37b6cba3ee0dd154164df274a22dfe0dce9e6b77e3411b33f6ccb1566e" Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.106518 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78926c37b6cba3ee0dd154164df274a22dfe0dce9e6b77e3411b33f6ccb1566e"} err="failed to get container status \"78926c37b6cba3ee0dd154164df274a22dfe0dce9e6b77e3411b33f6ccb1566e\": rpc error: code = NotFound desc = could not find container \"78926c37b6cba3ee0dd154164df274a22dfe0dce9e6b77e3411b33f6ccb1566e\": container with ID starting with 
78926c37b6cba3ee0dd154164df274a22dfe0dce9e6b77e3411b33f6ccb1566e not found: ID does not exist" Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.139904 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d68fbfdc-ssw6j"] Nov 24 22:00:10 crc kubenswrapper[4767]: W1124 22:00:10.144701 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c337669_c5dd_4162_a7cb_a38a0cd86dbe.slice/crio-4fc687bd200f11d4e16fbe81fe675ef64a00da50a3cc0f52d3d03b222d009485 WatchSource:0}: Error finding container 4fc687bd200f11d4e16fbe81fe675ef64a00da50a3cc0f52d3d03b222d009485: Status 404 returned error can't find the container with id 4fc687bd200f11d4e16fbe81fe675ef64a00da50a3cc0f52d3d03b222d009485 Nov 24 22:00:10 crc kubenswrapper[4767]: I1124 22:00:10.333391 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d66e31b6-987b-4d4f-a897-14bce551de92" path="/var/lib/kubelet/pods/d66e31b6-987b-4d4f-a897-14bce551de92/volumes" Nov 24 22:00:11 crc kubenswrapper[4767]: I1124 22:00:11.065572 4767 generic.go:334] "Generic (PLEG): container finished" podID="7c337669-c5dd-4162-a7cb-a38a0cd86dbe" containerID="5b50004e3e96c76f7331fa15938aec7f7e48c92352277a2325d65b3a2b0d91b9" exitCode=0 Nov 24 22:00:11 crc kubenswrapper[4767]: I1124 22:00:11.065649 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" event={"ID":"7c337669-c5dd-4162-a7cb-a38a0cd86dbe","Type":"ContainerDied","Data":"5b50004e3e96c76f7331fa15938aec7f7e48c92352277a2325d65b3a2b0d91b9"} Nov 24 22:00:11 crc kubenswrapper[4767]: I1124 22:00:11.065684 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" event={"ID":"7c337669-c5dd-4162-a7cb-a38a0cd86dbe","Type":"ContainerStarted","Data":"4fc687bd200f11d4e16fbe81fe675ef64a00da50a3cc0f52d3d03b222d009485"} Nov 24 22:00:12 crc kubenswrapper[4767]: I1124 22:00:12.082521 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" event={"ID":"7c337669-c5dd-4162-a7cb-a38a0cd86dbe","Type":"ContainerStarted","Data":"e33a15751a64c824fc73094870e52a2ff38c3fee2b273432c2609106b26795fb"} Nov 24 22:00:12 crc kubenswrapper[4767]: I1124 22:00:12.083079 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:12 crc kubenswrapper[4767]: I1124 22:00:12.116492 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" podStartSLOduration=3.116471664 podStartE2EDuration="3.116471664s" podCreationTimestamp="2025-11-24 22:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:00:12.10575741 +0000 UTC m=+1295.022740792" watchObservedRunningTime="2025-11-24 22:00:12.116471664 +0000 UTC m=+1295.033455046" Nov 24 22:00:19 crc kubenswrapper[4767]: I1124 22:00:19.680454 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d68fbfdc-ssw6j" Nov 24 22:00:19 crc kubenswrapper[4767]: I1124 22:00:19.750089 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4c225"] Nov 24 22:00:19 crc kubenswrapper[4767]: I1124 22:00:19.750427 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" podUID="26a0604f-ee55-4bfd-9b22-728392d9d854" 
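The two "DeleteContainer returned error ... NotFound" entries above are benign: the container was already removed, so deletion is treated as idempotent and a NotFound from the CRI runtime counts as success. A small sketch of that pattern follows; fakeRuntimeRemove stands in for the real CRI RemoveContainer call and is not kubelet code.

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // fakeRuntimeRemove simulates the runtime reporting a missing container,
    // as cri-o does in the log entries above.
    func fakeRuntimeRemove(id string) error {
        return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func removeContainer(id string) error {
        if err := fakeRuntimeRemove(id); err != nil {
            if status.Code(err) == codes.NotFound {
                // Already gone: removal is idempotent, so swallow the error.
                fmt.Printf("container %s already removed, ignoring\n", id)
                return nil
            }
            return fmt.Errorf("remove failed: %w", err)
        }
        return nil
    }

    func main() {
        _ = removeContainer("25cd0e98c20c")
    }
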
containerName="dnsmasq-dns" containerID="cri-o://8e867542fe555a1f1945719bc235e8831bf3a6cf4cdd520b509a373473312910" gracePeriod=10 Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.178146 4767 generic.go:334] "Generic (PLEG): container finished" podID="26a0604f-ee55-4bfd-9b22-728392d9d854" containerID="8e867542fe555a1f1945719bc235e8831bf3a6cf4cdd520b509a373473312910" exitCode=0 Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.178190 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" event={"ID":"26a0604f-ee55-4bfd-9b22-728392d9d854","Type":"ContainerDied","Data":"8e867542fe555a1f1945719bc235e8831bf3a6cf4cdd520b509a373473312910"} Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.178216 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" event={"ID":"26a0604f-ee55-4bfd-9b22-728392d9d854","Type":"ContainerDied","Data":"fee458228a6f541e452634d080a51eb08904987af844dee43f863259374d82a4"} Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.178227 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fee458228a6f541e452634d080a51eb08904987af844dee43f863259374d82a4" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.232166 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.387609 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-svc\") pod \"26a0604f-ee55-4bfd-9b22-728392d9d854\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.387689 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-nb\") pod \"26a0604f-ee55-4bfd-9b22-728392d9d854\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.387802 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-config\") pod \"26a0604f-ee55-4bfd-9b22-728392d9d854\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.387848 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmgrz\" (UniqueName: \"kubernetes.io/projected/26a0604f-ee55-4bfd-9b22-728392d9d854-kube-api-access-pmgrz\") pod \"26a0604f-ee55-4bfd-9b22-728392d9d854\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.387873 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-swift-storage-0\") pod \"26a0604f-ee55-4bfd-9b22-728392d9d854\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.387891 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-openstack-edpm-ipam\") pod \"26a0604f-ee55-4bfd-9b22-728392d9d854\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " Nov 24 
22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.387988 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-sb\") pod \"26a0604f-ee55-4bfd-9b22-728392d9d854\" (UID: \"26a0604f-ee55-4bfd-9b22-728392d9d854\") " Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.395197 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a0604f-ee55-4bfd-9b22-728392d9d854-kube-api-access-pmgrz" (OuterVolumeSpecName: "kube-api-access-pmgrz") pod "26a0604f-ee55-4bfd-9b22-728392d9d854" (UID: "26a0604f-ee55-4bfd-9b22-728392d9d854"). InnerVolumeSpecName "kube-api-access-pmgrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.442743 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "26a0604f-ee55-4bfd-9b22-728392d9d854" (UID: "26a0604f-ee55-4bfd-9b22-728392d9d854"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.446650 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "26a0604f-ee55-4bfd-9b22-728392d9d854" (UID: "26a0604f-ee55-4bfd-9b22-728392d9d854"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.451180 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26a0604f-ee55-4bfd-9b22-728392d9d854" (UID: "26a0604f-ee55-4bfd-9b22-728392d9d854"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.456297 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-config" (OuterVolumeSpecName: "config") pod "26a0604f-ee55-4bfd-9b22-728392d9d854" (UID: "26a0604f-ee55-4bfd-9b22-728392d9d854"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.458817 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "26a0604f-ee55-4bfd-9b22-728392d9d854" (UID: "26a0604f-ee55-4bfd-9b22-728392d9d854"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.463651 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "26a0604f-ee55-4bfd-9b22-728392d9d854" (UID: "26a0604f-ee55-4bfd-9b22-728392d9d854"). InnerVolumeSpecName "dns-swift-storage-0". 
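The "Killing container with a grace period ... gracePeriod=10" entry above corresponds to a CRI StopContainer call, which takes the grace period as its stop timeout; the runtime typically sends SIGTERM and escalates to SIGKILL when the timeout expires. The Runtime interface below is an illustrative stand-in for the real CRI client, not kubelet code.

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // Runtime mirrors the shape of the CRI StopContainer(containerID, timeout)
    // RPC (illustrative interface, not the real client).
    type Runtime interface {
        StopContainer(ctx context.Context, id string, timeout int64) error
    }

    type fakeRuntime struct{}

    func (fakeRuntime) StopContainer(ctx context.Context, id string, timeout int64) error {
        fmt.Printf("SIGTERM -> %s; SIGKILL after %ds\n", id[:12], timeout)
        return nil
    }

    func killContainer(rt Runtime, id string, gracePeriod int64) error {
        // Give the RPC itself a little slack beyond the container's grace period.
        ctx, cancel := context.WithTimeout(context.Background(),
            time.Duration(gracePeriod)*time.Second+2*time.Second)
        defer cancel()
        return rt.StopContainer(ctx, id, gracePeriod)
    }

    func main() {
        _ = killContainer(fakeRuntime{},
            "8e867542fe555a1f1945719bc235e8831bf3a6cf4cdd520b509a373473312910", 10)
    }
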
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.490421 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.490461 4767 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.490471 4767 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.490483 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-config\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.490495 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmgrz\" (UniqueName: \"kubernetes.io/projected/26a0604f-ee55-4bfd-9b22-728392d9d854-kube-api-access-pmgrz\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.490509 4767 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:20 crc kubenswrapper[4767]: I1124 22:00:20.490520 4767 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26a0604f-ee55-4bfd-9b22-728392d9d854-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:21 crc kubenswrapper[4767]: I1124 22:00:21.187928 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-4c225" Nov 24 22:00:21 crc kubenswrapper[4767]: I1124 22:00:21.233164 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4c225"] Nov 24 22:00:21 crc kubenswrapper[4767]: I1124 22:00:21.247926 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-4c225"] Nov 24 22:00:22 crc kubenswrapper[4767]: I1124 22:00:22.324229 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26a0604f-ee55-4bfd-9b22-728392d9d854" path="/var/lib/kubelet/pods/26a0604f-ee55-4bfd-9b22-728392d9d854/volumes" Nov 24 22:00:31 crc kubenswrapper[4767]: I1124 22:00:31.308131 4767 generic.go:334] "Generic (PLEG): container finished" podID="6e04e8f5-1d91-474f-b67b-d8fa24e00b90" containerID="b5f04af9cb9fde5c4f9caf8a125e64f1251df618bbc18d042b6045e5a7dd7929" exitCode=0 Nov 24 22:00:31 crc kubenswrapper[4767]: I1124 22:00:31.308382 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6e04e8f5-1d91-474f-b67b-d8fa24e00b90","Type":"ContainerDied","Data":"b5f04af9cb9fde5c4f9caf8a125e64f1251df618bbc18d042b6045e5a7dd7929"} Nov 24 22:00:32 crc kubenswrapper[4767]: I1124 22:00:32.324653 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6e04e8f5-1d91-474f-b67b-d8fa24e00b90","Type":"ContainerStarted","Data":"d930eddc6f2fc858741dd263bc52737f4d2752903ea649e43a84f7e0428ebecb"} Nov 24 22:00:32 crc kubenswrapper[4767]: I1124 22:00:32.325808 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 22:00:32 crc kubenswrapper[4767]: I1124 22:00:32.325907 4767 generic.go:334] "Generic (PLEG): container finished" podID="54f86c38-24f7-427b-9b8c-4f4505f7fa1d" containerID="568bf94f48a3fdcc73874f9e051c11270b46db9436aae8c8c02b6f58c475c8e7" exitCode=0 Nov 24 22:00:32 crc kubenswrapper[4767]: I1124 22:00:32.325979 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"54f86c38-24f7-427b-9b8c-4f4505f7fa1d","Type":"ContainerDied","Data":"568bf94f48a3fdcc73874f9e051c11270b46db9436aae8c8c02b6f58c475c8e7"} Nov 24 22:00:32 crc kubenswrapper[4767]: I1124 22:00:32.359392 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.359368744 podStartE2EDuration="36.359368744s" podCreationTimestamp="2025-11-24 21:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:00:32.354676271 +0000 UTC m=+1315.271659643" watchObservedRunningTime="2025-11-24 22:00:32.359368744 +0000 UTC m=+1315.276352116" Nov 24 22:00:33 crc kubenswrapper[4767]: I1124 22:00:33.338842 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"54f86c38-24f7-427b-9b8c-4f4505f7fa1d","Type":"ContainerStarted","Data":"21df1a8351d34575ab5d13ebf1b10a5a71cce21c42ae267049458c82c8ddb585"} Nov 24 22:00:33 crc kubenswrapper[4767]: I1124 22:00:33.339718 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 22:00:33 crc kubenswrapper[4767]: I1124 22:00:33.359509 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.359489928 podStartE2EDuration="36.359489928s" 
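The "Observed pod startup duration" entries record podStartSLOduration as the end-to-end startup time minus the image-pull window. For rabbitmq-server-0 above, both pull timestamps are the zero time, so SLO and E2E durations are equal (36.359368744s); for repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd later in this log, an ~8.41s pull is subtracted. The check below recomputes the repo-setup numbers; treating the subtraction as the tracker's formula is an assumption, but these log values satisfy it exactly.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching Go's time.Time.String() output, which is what the
        // kubelet prints in these log fields.
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // Values copied from the repo-setup pod's startup record in this log.
        created := parse("2025-11-24 22:00:37 +0000 UTC")
        firstPull := parse("2025-11-24 22:00:38.782807721 +0000 UTC")
        lastPull := parse("2025-11-24 22:00:47.197455641 +0000 UTC")
        running := parse("2025-11-24 22:00:47.527987589 +0000 UTC")

        e2e := running.Sub(created)          // 10.527987589s, as logged
        slo := e2e - lastPull.Sub(firstPull) // 2.113339669s, as logged
        fmt.Println("podStartE2EDuration:", e2e, "podStartSLOduration:", slo)
    }
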
podCreationTimestamp="2025-11-24 21:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:00:33.358684755 +0000 UTC m=+1316.275668127" watchObservedRunningTime="2025-11-24 22:00:33.359489928 +0000 UTC m=+1316.276473300" Nov 24 22:00:35 crc kubenswrapper[4767]: I1124 22:00:35.481045 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:00:35 crc kubenswrapper[4767]: I1124 22:00:35.481356 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:00:35 crc kubenswrapper[4767]: I1124 22:00:35.481398 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 22:00:35 crc kubenswrapper[4767]: I1124 22:00:35.482092 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb71cfb4f27344cb7cceaf9ac7651774b144254e6ab13360f5b5c998afd38e04"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:00:35 crc kubenswrapper[4767]: I1124 22:00:35.482134 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://cb71cfb4f27344cb7cceaf9ac7651774b144254e6ab13360f5b5c998afd38e04" gracePeriod=600 Nov 24 22:00:36 crc kubenswrapper[4767]: I1124 22:00:36.367973 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="cb71cfb4f27344cb7cceaf9ac7651774b144254e6ab13360f5b5c998afd38e04" exitCode=0 Nov 24 22:00:36 crc kubenswrapper[4767]: I1124 22:00:36.368051 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"cb71cfb4f27344cb7cceaf9ac7651774b144254e6ab13360f5b5c998afd38e04"} Nov 24 22:00:36 crc kubenswrapper[4767]: I1124 22:00:36.368689 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c"} Nov 24 22:00:36 crc kubenswrapper[4767]: I1124 22:00:36.368720 4767 scope.go:117] "RemoveContainer" containerID="b2a57db0a7357f691890d9ae543dd8c8e63ac1b14aa419c6ceaa2fe9ae17ceb2" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.876754 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd"] Nov 24 22:00:37 crc kubenswrapper[4767]: E1124 22:00:37.877540 4767 cpu_manager.go:410] "RemoveStaleState: removing container" 
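The cpu_manager "RemoveStaleState: removing container" entries that begin here (with the matching memory_manager and state_mem lines below) run at pod admission: the resource managers walk their saved per-container assignments and drop entries whose pods no longer exist, in this case the two deleted dnsmasq pods. A rough sketch, with illustrative types rather than the kubelet's real state interfaces:

    package main

    import "fmt"

    // key identifies one container's saved CPU assignment (illustrative).
    type key struct{ podUID, container string }

    // removeStaleState drops assignments for pods that are no longer active.
    func removeStaleState(assignments map[key]string, active map[string]bool) {
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container pod=%s name=%s\n",
                    k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"d66e31b6-987b-4d4f-a897-14bce551de92", "dnsmasq-dns"}: "0-3",
            {"26a0604f-ee55-4bfd-9b22-728392d9d854", "init"}:        "0-3",
            {"7c337669-c5dd-4162-a7cb-a38a0cd86dbe", "dnsmasq-dns"}: "0-3",
        }
        active := map[string]bool{"7c337669-c5dd-4162-a7cb-a38a0cd86dbe": true}
        removeStaleState(assignments, active)
    }
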
podUID="d66e31b6-987b-4d4f-a897-14bce551de92" containerName="dnsmasq-dns" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.877555 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66e31b6-987b-4d4f-a897-14bce551de92" containerName="dnsmasq-dns" Nov 24 22:00:37 crc kubenswrapper[4767]: E1124 22:00:37.877575 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a0604f-ee55-4bfd-9b22-728392d9d854" containerName="dnsmasq-dns" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.877583 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a0604f-ee55-4bfd-9b22-728392d9d854" containerName="dnsmasq-dns" Nov 24 22:00:37 crc kubenswrapper[4767]: E1124 22:00:37.877599 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66e31b6-987b-4d4f-a897-14bce551de92" containerName="init" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.877607 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66e31b6-987b-4d4f-a897-14bce551de92" containerName="init" Nov 24 22:00:37 crc kubenswrapper[4767]: E1124 22:00:37.877625 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a0604f-ee55-4bfd-9b22-728392d9d854" containerName="init" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.877632 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a0604f-ee55-4bfd-9b22-728392d9d854" containerName="init" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.877885 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66e31b6-987b-4d4f-a897-14bce551de92" containerName="dnsmasq-dns" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.877907 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a0604f-ee55-4bfd-9b22-728392d9d854" containerName="dnsmasq-dns" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.878732 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.882788 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.883073 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.883452 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.884321 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.898207 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd"] Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.944982 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.945068 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzwkv\" (UniqueName: \"kubernetes.io/projected/256a937c-fb13-42bf-b69f-140b9d8bad1d-kube-api-access-xzwkv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.945098 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:37 crc kubenswrapper[4767]: I1124 22:00:37.945383 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:38 crc kubenswrapper[4767]: I1124 22:00:38.047081 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzwkv\" (UniqueName: \"kubernetes.io/projected/256a937c-fb13-42bf-b69f-140b9d8bad1d-kube-api-access-xzwkv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:38 crc kubenswrapper[4767]: I1124 22:00:38.047122 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:38 crc kubenswrapper[4767]: I1124 22:00:38.047190 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:38 crc kubenswrapper[4767]: I1124 22:00:38.047329 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:38 crc kubenswrapper[4767]: I1124 22:00:38.054958 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:38 crc kubenswrapper[4767]: I1124 22:00:38.055430 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:38 crc kubenswrapper[4767]: I1124 22:00:38.063649 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:38 crc kubenswrapper[4767]: I1124 22:00:38.076668 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzwkv\" (UniqueName: \"kubernetes.io/projected/256a937c-fb13-42bf-b69f-140b9d8bad1d-kube-api-access-xzwkv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:38 crc kubenswrapper[4767]: I1124 22:00:38.204732 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:00:38 crc kubenswrapper[4767]: I1124 22:00:38.768238 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd"] Nov 24 22:00:39 crc kubenswrapper[4767]: I1124 22:00:39.417828 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" event={"ID":"256a937c-fb13-42bf-b69f-140b9d8bad1d","Type":"ContainerStarted","Data":"cf5fd7897ce5e2a29715f6b960a091278605df7ecc86bc49254df5820deddea1"} Nov 24 22:00:46 crc kubenswrapper[4767]: I1124 22:00:46.497463 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 22:00:47 crc kubenswrapper[4767]: I1124 22:00:47.199787 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:00:47 crc kubenswrapper[4767]: I1124 22:00:47.511109 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" event={"ID":"256a937c-fb13-42bf-b69f-140b9d8bad1d","Type":"ContainerStarted","Data":"3891721ad09a708b98964ce8348bb56c51ce7bab780862ab4dbbfd6644ffb435"} Nov 24 22:00:47 crc kubenswrapper[4767]: I1124 22:00:47.528009 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" podStartSLOduration=2.113339669 podStartE2EDuration="10.527987589s" podCreationTimestamp="2025-11-24 22:00:37 +0000 UTC" firstStartedPulling="2025-11-24 22:00:38.782807721 +0000 UTC m=+1321.699791093" lastFinishedPulling="2025-11-24 22:00:47.197455641 +0000 UTC m=+1330.114439013" observedRunningTime="2025-11-24 22:00:47.523534963 +0000 UTC m=+1330.440518335" watchObservedRunningTime="2025-11-24 22:00:47.527987589 +0000 UTC m=+1330.444970971" Nov 24 22:00:47 crc kubenswrapper[4767]: I1124 22:00:47.579505 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 22:00:58 crc kubenswrapper[4767]: I1124 22:00:58.628238 4767 generic.go:334] "Generic (PLEG): container finished" podID="256a937c-fb13-42bf-b69f-140b9d8bad1d" containerID="3891721ad09a708b98964ce8348bb56c51ce7bab780862ab4dbbfd6644ffb435" exitCode=0 Nov 24 22:00:58 crc kubenswrapper[4767]: I1124 22:00:58.628344 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" event={"ID":"256a937c-fb13-42bf-b69f-140b9d8bad1d","Type":"ContainerDied","Data":"3891721ad09a708b98964ce8348bb56c51ce7bab780862ab4dbbfd6644ffb435"} Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.043701 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.148118 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29400361-ffqrr"] Nov 24 22:01:00 crc kubenswrapper[4767]: E1124 22:01:00.148639 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256a937c-fb13-42bf-b69f-140b9d8bad1d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.148657 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="256a937c-fb13-42bf-b69f-140b9d8bad1d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.148871 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="256a937c-fb13-42bf-b69f-140b9d8bad1d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.149508 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.162015 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400361-ffqrr"] Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.192533 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-inventory\") pod \"256a937c-fb13-42bf-b69f-140b9d8bad1d\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.192849 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-repo-setup-combined-ca-bundle\") pod \"256a937c-fb13-42bf-b69f-140b9d8bad1d\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.192879 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-ssh-key\") pod \"256a937c-fb13-42bf-b69f-140b9d8bad1d\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.192898 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzwkv\" (UniqueName: \"kubernetes.io/projected/256a937c-fb13-42bf-b69f-140b9d8bad1d-kube-api-access-xzwkv\") pod \"256a937c-fb13-42bf-b69f-140b9d8bad1d\" (UID: \"256a937c-fb13-42bf-b69f-140b9d8bad1d\") " Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.193112 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-fernet-keys\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.193139 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-combined-ca-bundle\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc 
kubenswrapper[4767]: I1124 22:01:00.193241 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqbg8\" (UniqueName: \"kubernetes.io/projected/408f9b2f-5719-4224-859e-d583726e92aa-kube-api-access-nqbg8\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.193344 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-config-data\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.197592 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "256a937c-fb13-42bf-b69f-140b9d8bad1d" (UID: "256a937c-fb13-42bf-b69f-140b9d8bad1d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.198402 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256a937c-fb13-42bf-b69f-140b9d8bad1d-kube-api-access-xzwkv" (OuterVolumeSpecName: "kube-api-access-xzwkv") pod "256a937c-fb13-42bf-b69f-140b9d8bad1d" (UID: "256a937c-fb13-42bf-b69f-140b9d8bad1d"). InnerVolumeSpecName "kube-api-access-xzwkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.219112 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "256a937c-fb13-42bf-b69f-140b9d8bad1d" (UID: "256a937c-fb13-42bf-b69f-140b9d8bad1d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.221016 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-inventory" (OuterVolumeSpecName: "inventory") pod "256a937c-fb13-42bf-b69f-140b9d8bad1d" (UID: "256a937c-fb13-42bf-b69f-140b9d8bad1d"). InnerVolumeSpecName "inventory". 
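The pod keystone-cron-29400361-ffqrr admitted above encodes its schedule in its name: the Kubernetes CronJob controller names each Job after the scheduled run time expressed in minutes since the Unix epoch. The check below confirms that suffix 29400361 is 2025-11-24 22:01 UTC, matching this pod's creation time in the log.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const suffix = 29400361 // from pod keystone-cron-29400361-ffqrr
        scheduled := time.Unix(suffix*60, 0).UTC()
        fmt.Println(scheduled) // 2025-11-24 22:01:00 +0000 UTC
    }
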
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.294583 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-config-data\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.294678 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-fernet-keys\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.294701 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-combined-ca-bundle\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.294771 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqbg8\" (UniqueName: \"kubernetes.io/projected/408f9b2f-5719-4224-859e-d583726e92aa-kube-api-access-nqbg8\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.294847 4767 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.294864 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.294873 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzwkv\" (UniqueName: \"kubernetes.io/projected/256a937c-fb13-42bf-b69f-140b9d8bad1d-kube-api-access-xzwkv\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.294883 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/256a937c-fb13-42bf-b69f-140b9d8bad1d-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.298404 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-config-data\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.298654 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-combined-ca-bundle\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.299130 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-fernet-keys\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.317176 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqbg8\" (UniqueName: \"kubernetes.io/projected/408f9b2f-5719-4224-859e-d583726e92aa-kube-api-access-nqbg8\") pod \"keystone-cron-29400361-ffqrr\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.466472 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.651707 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" event={"ID":"256a937c-fb13-42bf-b69f-140b9d8bad1d","Type":"ContainerDied","Data":"cf5fd7897ce5e2a29715f6b960a091278605df7ecc86bc49254df5820deddea1"} Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.651933 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf5fd7897ce5e2a29715f6b960a091278605df7ecc86bc49254df5820deddea1" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.651809 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.727915 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400361-ffqrr"] Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.763751 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j"] Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.775596 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j"] Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.775744 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.777841 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.778492 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.778498 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.778724 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.909757 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6jl5j\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.910375 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2mbn\" (UniqueName: \"kubernetes.io/projected/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-kube-api-access-k2mbn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6jl5j\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:00 crc kubenswrapper[4767]: I1124 22:01:00.910418 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6jl5j\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.012116 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2mbn\" (UniqueName: \"kubernetes.io/projected/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-kube-api-access-k2mbn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6jl5j\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.012170 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6jl5j\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.012229 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6jl5j\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.026349 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6jl5j\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.027012 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6jl5j\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.033032 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2mbn\" (UniqueName: \"kubernetes.io/projected/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-kube-api-access-k2mbn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6jl5j\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.094217 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.628365 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j"] Nov 24 22:01:01 crc kubenswrapper[4767]: W1124 22:01:01.628697 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a70e2b9_04fb_4374_aacd_bdfb2cd8fd11.slice/crio-68ed098bc4049587877f5abf6011d0631c55af7e91b882ff2dde5f48e0377f02 WatchSource:0}: Error finding container 68ed098bc4049587877f5abf6011d0631c55af7e91b882ff2dde5f48e0377f02: Status 404 returned error can't find the container with id 68ed098bc4049587877f5abf6011d0631c55af7e91b882ff2dde5f48e0377f02 Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.667667 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" event={"ID":"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11","Type":"ContainerStarted","Data":"68ed098bc4049587877f5abf6011d0631c55af7e91b882ff2dde5f48e0377f02"} Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.670184 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-ffqrr" event={"ID":"408f9b2f-5719-4224-859e-d583726e92aa","Type":"ContainerStarted","Data":"7eeddcbca13bf652f2b6a755f11eaae1f61646ebbeb195da38f504d6b44e7381"} Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.670214 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-ffqrr" event={"ID":"408f9b2f-5719-4224-859e-d583726e92aa","Type":"ContainerStarted","Data":"c47cbb39f6f983d6fe010ed3def8c5d9e2f1d566df3f87d3218beea93f58bf15"} Nov 24 22:01:01 crc kubenswrapper[4767]: I1124 22:01:01.697399 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29400361-ffqrr" podStartSLOduration=1.697374714 podStartE2EDuration="1.697374714s" podCreationTimestamp="2025-11-24 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:01:01.685511519 +0000 UTC m=+1344.602494911" watchObservedRunningTime="2025-11-24 22:01:01.697374714 +0000 
UTC m=+1344.614358096" Nov 24 22:01:02 crc kubenswrapper[4767]: I1124 22:01:02.687531 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" event={"ID":"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11","Type":"ContainerStarted","Data":"197b2e232f215897f020072843094446c20912c7f4f7b4bfef45266d63627ab1"} Nov 24 22:01:02 crc kubenswrapper[4767]: I1124 22:01:02.715649 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" podStartSLOduration=2.158032077 podStartE2EDuration="2.715629083s" podCreationTimestamp="2025-11-24 22:01:00 +0000 UTC" firstStartedPulling="2025-11-24 22:01:01.633376663 +0000 UTC m=+1344.550360045" lastFinishedPulling="2025-11-24 22:01:02.190973679 +0000 UTC m=+1345.107957051" observedRunningTime="2025-11-24 22:01:02.706481384 +0000 UTC m=+1345.623464756" watchObservedRunningTime="2025-11-24 22:01:02.715629083 +0000 UTC m=+1345.632612455" Nov 24 22:01:03 crc kubenswrapper[4767]: I1124 22:01:03.702090 4767 generic.go:334] "Generic (PLEG): container finished" podID="408f9b2f-5719-4224-859e-d583726e92aa" containerID="7eeddcbca13bf652f2b6a755f11eaae1f61646ebbeb195da38f504d6b44e7381" exitCode=0 Nov 24 22:01:03 crc kubenswrapper[4767]: I1124 22:01:03.702165 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-ffqrr" event={"ID":"408f9b2f-5719-4224-859e-d583726e92aa","Type":"ContainerDied","Data":"7eeddcbca13bf652f2b6a755f11eaae1f61646ebbeb195da38f504d6b44e7381"} Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.196595 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.309767 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqbg8\" (UniqueName: \"kubernetes.io/projected/408f9b2f-5719-4224-859e-d583726e92aa-kube-api-access-nqbg8\") pod \"408f9b2f-5719-4224-859e-d583726e92aa\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.309904 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-fernet-keys\") pod \"408f9b2f-5719-4224-859e-d583726e92aa\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.309964 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-config-data\") pod \"408f9b2f-5719-4224-859e-d583726e92aa\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.310100 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-combined-ca-bundle\") pod \"408f9b2f-5719-4224-859e-d583726e92aa\" (UID: \"408f9b2f-5719-4224-859e-d583726e92aa\") " Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.315600 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "408f9b2f-5719-4224-859e-d583726e92aa" (UID: "408f9b2f-5719-4224-859e-d583726e92aa"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.317658 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408f9b2f-5719-4224-859e-d583726e92aa-kube-api-access-nqbg8" (OuterVolumeSpecName: "kube-api-access-nqbg8") pod "408f9b2f-5719-4224-859e-d583726e92aa" (UID: "408f9b2f-5719-4224-859e-d583726e92aa"). InnerVolumeSpecName "kube-api-access-nqbg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.345617 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "408f9b2f-5719-4224-859e-d583726e92aa" (UID: "408f9b2f-5719-4224-859e-d583726e92aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.367802 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-config-data" (OuterVolumeSpecName: "config-data") pod "408f9b2f-5719-4224-859e-d583726e92aa" (UID: "408f9b2f-5719-4224-859e-d583726e92aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.413214 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqbg8\" (UniqueName: \"kubernetes.io/projected/408f9b2f-5719-4224-859e-d583726e92aa-kube-api-access-nqbg8\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.413290 4767 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.413310 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.413328 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408f9b2f-5719-4224-859e-d583726e92aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.730185 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-ffqrr" event={"ID":"408f9b2f-5719-4224-859e-d583726e92aa","Type":"ContainerDied","Data":"c47cbb39f6f983d6fe010ed3def8c5d9e2f1d566df3f87d3218beea93f58bf15"} Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.730450 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c47cbb39f6f983d6fe010ed3def8c5d9e2f1d566df3f87d3218beea93f58bf15" Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.730211 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29400361-ffqrr" Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.732290 4767 generic.go:334] "Generic (PLEG): container finished" podID="6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11" containerID="197b2e232f215897f020072843094446c20912c7f4f7b4bfef45266d63627ab1" exitCode=0 Nov 24 22:01:05 crc kubenswrapper[4767]: I1124 22:01:05.732335 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" event={"ID":"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11","Type":"ContainerDied","Data":"197b2e232f215897f020072843094446c20912c7f4f7b4bfef45266d63627ab1"} Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.141790 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.249259 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-inventory\") pod \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.249598 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2mbn\" (UniqueName: \"kubernetes.io/projected/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-kube-api-access-k2mbn\") pod \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.249656 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-ssh-key\") pod \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\" (UID: \"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11\") " Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.255627 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-kube-api-access-k2mbn" (OuterVolumeSpecName: "kube-api-access-k2mbn") pod "6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11" (UID: "6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11"). InnerVolumeSpecName "kube-api-access-k2mbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.277377 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-inventory" (OuterVolumeSpecName: "inventory") pod "6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11" (UID: "6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.280049 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11" (UID: "6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.352208 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2mbn\" (UniqueName: \"kubernetes.io/projected/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-kube-api-access-k2mbn\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.352261 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.352299 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.754167 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" event={"ID":"6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11","Type":"ContainerDied","Data":"68ed098bc4049587877f5abf6011d0631c55af7e91b882ff2dde5f48e0377f02"} Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.754198 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68ed098bc4049587877f5abf6011d0631c55af7e91b882ff2dde5f48e0377f02" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.754210 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6jl5j" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.829263 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb"] Nov 24 22:01:07 crc kubenswrapper[4767]: E1124 22:01:07.829892 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.829914 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 22:01:07 crc kubenswrapper[4767]: E1124 22:01:07.829935 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="408f9b2f-5719-4224-859e-d583726e92aa" containerName="keystone-cron" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.829944 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="408f9b2f-5719-4224-859e-d583726e92aa" containerName="keystone-cron" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.830233 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.830286 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="408f9b2f-5719-4224-859e-d583726e92aa" containerName="keystone-cron" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.831118 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.833809 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.834124 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.834561 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.835082 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.847291 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb"] Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.964498 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6xjz\" (UniqueName: \"kubernetes.io/projected/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-kube-api-access-s6xjz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.964612 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.964805 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:07 crc kubenswrapper[4767]: I1124 22:01:07.965005 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:08 crc kubenswrapper[4767]: I1124 22:01:08.070883 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:08 crc kubenswrapper[4767]: I1124 22:01:08.071259 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:08 crc kubenswrapper[4767]: I1124 22:01:08.071455 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6xjz\" (UniqueName: \"kubernetes.io/projected/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-kube-api-access-s6xjz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:08 crc kubenswrapper[4767]: I1124 22:01:08.071534 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:08 crc kubenswrapper[4767]: I1124 22:01:08.077608 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:08 crc kubenswrapper[4767]: I1124 22:01:08.079783 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:08 crc kubenswrapper[4767]: I1124 22:01:08.082990 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:08 crc kubenswrapper[4767]: I1124 22:01:08.101573 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6xjz\" (UniqueName: \"kubernetes.io/projected/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-kube-api-access-s6xjz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:08 crc kubenswrapper[4767]: I1124 22:01:08.152082 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:01:08 crc kubenswrapper[4767]: I1124 22:01:08.736897 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb"] Nov 24 22:01:08 crc kubenswrapper[4767]: I1124 22:01:08.766014 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" event={"ID":"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc","Type":"ContainerStarted","Data":"274699a94db38831d6d344316bd4d3ca38d6571d6d313a8274fa56e1e6ac56ed"} Nov 24 22:01:09 crc kubenswrapper[4767]: I1124 22:01:09.779111 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" event={"ID":"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc","Type":"ContainerStarted","Data":"8b9697a16175f12169748abbe0ed801bc8ba3e3cd697aced3e47df5d80e6160f"} Nov 24 22:01:09 crc kubenswrapper[4767]: I1124 22:01:09.800464 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" podStartSLOduration=2.3071514300000002 podStartE2EDuration="2.800428285s" podCreationTimestamp="2025-11-24 22:01:07 +0000 UTC" firstStartedPulling="2025-11-24 22:01:08.744608563 +0000 UTC m=+1351.661591935" lastFinishedPulling="2025-11-24 22:01:09.237885378 +0000 UTC m=+1352.154868790" observedRunningTime="2025-11-24 22:01:09.799607752 +0000 UTC m=+1352.716591134" watchObservedRunningTime="2025-11-24 22:01:09.800428285 +0000 UTC m=+1352.717411657" Nov 24 22:01:40 crc kubenswrapper[4767]: I1124 22:01:40.516010 4767 scope.go:117] "RemoveContainer" containerID="0f4318efa102b2021ecbb190e993c55ef88e68e1b29c03ab540f8048b98d3c08" Nov 24 22:01:40 crc kubenswrapper[4767]: I1124 22:01:40.560429 4767 scope.go:117] "RemoveContainer" containerID="52447f25e25ed7a7fe19296f5a720d733bfe1885d025d991e1c25d2d1e789a46" Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.340414 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9mc5x"] Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.344437 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.352512 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9mc5x"] Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.423022 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-catalog-content\") pod \"redhat-operators-9mc5x\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.423301 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-utilities\") pod \"redhat-operators-9mc5x\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.423495 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grvh4\" (UniqueName: \"kubernetes.io/projected/192a3297-dbba-4dd7-aab1-89a4c49a78be-kube-api-access-grvh4\") pod \"redhat-operators-9mc5x\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.524984 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grvh4\" (UniqueName: \"kubernetes.io/projected/192a3297-dbba-4dd7-aab1-89a4c49a78be-kube-api-access-grvh4\") pod \"redhat-operators-9mc5x\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.525348 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-catalog-content\") pod \"redhat-operators-9mc5x\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.525485 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-utilities\") pod \"redhat-operators-9mc5x\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.525931 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-catalog-content\") pod \"redhat-operators-9mc5x\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.526238 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-utilities\") pod \"redhat-operators-9mc5x\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.547631 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-grvh4\" (UniqueName: \"kubernetes.io/projected/192a3297-dbba-4dd7-aab1-89a4c49a78be-kube-api-access-grvh4\") pod \"redhat-operators-9mc5x\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:34 crc kubenswrapper[4767]: I1124 22:02:34.675133 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:35 crc kubenswrapper[4767]: I1124 22:02:35.132682 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9mc5x"] Nov 24 22:02:35 crc kubenswrapper[4767]: W1124 22:02:35.141584 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod192a3297_dbba_4dd7_aab1_89a4c49a78be.slice/crio-702cbc402b3807adc97e11eca96b6c9df0bf972a7c20ec05052d7697ef83bd7a WatchSource:0}: Error finding container 702cbc402b3807adc97e11eca96b6c9df0bf972a7c20ec05052d7697ef83bd7a: Status 404 returned error can't find the container with id 702cbc402b3807adc97e11eca96b6c9df0bf972a7c20ec05052d7697ef83bd7a Nov 24 22:02:35 crc kubenswrapper[4767]: I1124 22:02:35.481220 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:02:35 crc kubenswrapper[4767]: I1124 22:02:35.481555 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:02:36 crc kubenswrapper[4767]: I1124 22:02:36.001150 4767 generic.go:334] "Generic (PLEG): container finished" podID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerID="717bd5f3523689bc2b3da3c54e364e8095d23ef685fe1011da836b5143a16e6f" exitCode=0 Nov 24 22:02:36 crc kubenswrapper[4767]: I1124 22:02:36.001199 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mc5x" event={"ID":"192a3297-dbba-4dd7-aab1-89a4c49a78be","Type":"ContainerDied","Data":"717bd5f3523689bc2b3da3c54e364e8095d23ef685fe1011da836b5143a16e6f"} Nov 24 22:02:36 crc kubenswrapper[4767]: I1124 22:02:36.001228 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mc5x" event={"ID":"192a3297-dbba-4dd7-aab1-89a4c49a78be","Type":"ContainerStarted","Data":"702cbc402b3807adc97e11eca96b6c9df0bf972a7c20ec05052d7697ef83bd7a"} Nov 24 22:02:37 crc kubenswrapper[4767]: I1124 22:02:37.012003 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mc5x" event={"ID":"192a3297-dbba-4dd7-aab1-89a4c49a78be","Type":"ContainerStarted","Data":"3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf"} Nov 24 22:02:39 crc kubenswrapper[4767]: I1124 22:02:39.040515 4767 generic.go:334] "Generic (PLEG): container finished" podID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerID="3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf" exitCode=0 Nov 24 22:02:39 crc kubenswrapper[4767]: I1124 22:02:39.040566 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mc5x" 
event={"ID":"192a3297-dbba-4dd7-aab1-89a4c49a78be","Type":"ContainerDied","Data":"3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf"} Nov 24 22:02:40 crc kubenswrapper[4767]: I1124 22:02:40.058967 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mc5x" event={"ID":"192a3297-dbba-4dd7-aab1-89a4c49a78be","Type":"ContainerStarted","Data":"694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2"} Nov 24 22:02:40 crc kubenswrapper[4767]: I1124 22:02:40.078482 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9mc5x" podStartSLOduration=2.370406755 podStartE2EDuration="6.078454378s" podCreationTimestamp="2025-11-24 22:02:34 +0000 UTC" firstStartedPulling="2025-11-24 22:02:36.003559108 +0000 UTC m=+1438.920542480" lastFinishedPulling="2025-11-24 22:02:39.711606701 +0000 UTC m=+1442.628590103" observedRunningTime="2025-11-24 22:02:40.075671349 +0000 UTC m=+1442.992654731" watchObservedRunningTime="2025-11-24 22:02:40.078454378 +0000 UTC m=+1442.995437770" Nov 24 22:02:40 crc kubenswrapper[4767]: I1124 22:02:40.673606 4767 scope.go:117] "RemoveContainer" containerID="fb2e1cac4f1b51fc87973ad0d5819cdeb2b226eadcdb476e13469aa671999581" Nov 24 22:02:40 crc kubenswrapper[4767]: I1124 22:02:40.703570 4767 scope.go:117] "RemoveContainer" containerID="aedb0dbece973442c87095e420dec4503758c6c0eddabc0ed38179a3961be402" Nov 24 22:02:40 crc kubenswrapper[4767]: I1124 22:02:40.736365 4767 scope.go:117] "RemoveContainer" containerID="efa627a463412baccc8a672fb208753e727137216839867333d72081681dd5b1" Nov 24 22:02:40 crc kubenswrapper[4767]: I1124 22:02:40.799376 4767 scope.go:117] "RemoveContainer" containerID="297a33e19d485c3416d016d3124410d90109adebf8bebbd6d7e096327223c6bb" Nov 24 22:02:40 crc kubenswrapper[4767]: I1124 22:02:40.822628 4767 scope.go:117] "RemoveContainer" containerID="c5cede5ea26ecf48759374285f4500a72b56290d5a6897c73460e5862105b6a7" Nov 24 22:02:44 crc kubenswrapper[4767]: I1124 22:02:44.675958 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:44 crc kubenswrapper[4767]: I1124 22:02:44.676597 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:45 crc kubenswrapper[4767]: I1124 22:02:45.722755 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9mc5x" podUID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerName="registry-server" probeResult="failure" output=< Nov 24 22:02:45 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 22:02:45 crc kubenswrapper[4767]: > Nov 24 22:02:54 crc kubenswrapper[4767]: I1124 22:02:54.726932 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:54 crc kubenswrapper[4767]: I1124 22:02:54.803698 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:54 crc kubenswrapper[4767]: I1124 22:02:54.980912 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9mc5x"] Nov 24 22:02:56 crc kubenswrapper[4767]: I1124 22:02:56.235129 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9mc5x" 
podUID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerName="registry-server" containerID="cri-o://694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2" gracePeriod=2 Nov 24 22:02:56 crc kubenswrapper[4767]: I1124 22:02:56.667043 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:56 crc kubenswrapper[4767]: I1124 22:02:56.771097 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-catalog-content\") pod \"192a3297-dbba-4dd7-aab1-89a4c49a78be\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " Nov 24 22:02:56 crc kubenswrapper[4767]: I1124 22:02:56.771173 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grvh4\" (UniqueName: \"kubernetes.io/projected/192a3297-dbba-4dd7-aab1-89a4c49a78be-kube-api-access-grvh4\") pod \"192a3297-dbba-4dd7-aab1-89a4c49a78be\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " Nov 24 22:02:56 crc kubenswrapper[4767]: I1124 22:02:56.771389 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-utilities\") pod \"192a3297-dbba-4dd7-aab1-89a4c49a78be\" (UID: \"192a3297-dbba-4dd7-aab1-89a4c49a78be\") " Nov 24 22:02:56 crc kubenswrapper[4767]: I1124 22:02:56.772003 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-utilities" (OuterVolumeSpecName: "utilities") pod "192a3297-dbba-4dd7-aab1-89a4c49a78be" (UID: "192a3297-dbba-4dd7-aab1-89a4c49a78be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:02:56 crc kubenswrapper[4767]: I1124 22:02:56.776387 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192a3297-dbba-4dd7-aab1-89a4c49a78be-kube-api-access-grvh4" (OuterVolumeSpecName: "kube-api-access-grvh4") pod "192a3297-dbba-4dd7-aab1-89a4c49a78be" (UID: "192a3297-dbba-4dd7-aab1-89a4c49a78be"). InnerVolumeSpecName "kube-api-access-grvh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:02:56 crc kubenswrapper[4767]: I1124 22:02:56.852065 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "192a3297-dbba-4dd7-aab1-89a4c49a78be" (UID: "192a3297-dbba-4dd7-aab1-89a4c49a78be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:02:56 crc kubenswrapper[4767]: I1124 22:02:56.873540 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:02:56 crc kubenswrapper[4767]: I1124 22:02:56.873774 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192a3297-dbba-4dd7-aab1-89a4c49a78be-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:02:56 crc kubenswrapper[4767]: I1124 22:02:56.873789 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grvh4\" (UniqueName: \"kubernetes.io/projected/192a3297-dbba-4dd7-aab1-89a4c49a78be-kube-api-access-grvh4\") on node \"crc\" DevicePath \"\"" Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.250836 4767 generic.go:334] "Generic (PLEG): container finished" podID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerID="694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2" exitCode=0 Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.250897 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9mc5x" Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.250894 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mc5x" event={"ID":"192a3297-dbba-4dd7-aab1-89a4c49a78be","Type":"ContainerDied","Data":"694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2"} Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.251065 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mc5x" event={"ID":"192a3297-dbba-4dd7-aab1-89a4c49a78be","Type":"ContainerDied","Data":"702cbc402b3807adc97e11eca96b6c9df0bf972a7c20ec05052d7697ef83bd7a"} Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.251084 4767 scope.go:117] "RemoveContainer" containerID="694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2" Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.272652 4767 scope.go:117] "RemoveContainer" containerID="3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf" Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.299090 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9mc5x"] Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.302101 4767 scope.go:117] "RemoveContainer" containerID="717bd5f3523689bc2b3da3c54e364e8095d23ef685fe1011da836b5143a16e6f" Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.313252 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9mc5x"] Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.353656 4767 scope.go:117] "RemoveContainer" containerID="694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2" Nov 24 22:02:57 crc kubenswrapper[4767]: E1124 22:02:57.354091 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2\": container with ID starting with 694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2 not found: ID does not exist" containerID="694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2" Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.354142 4767 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2"} err="failed to get container status \"694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2\": rpc error: code = NotFound desc = could not find container \"694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2\": container with ID starting with 694c76efe4f721211b5def38f8752af8d6f50aea51f8a0848d18d858519090d2 not found: ID does not exist" Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.354176 4767 scope.go:117] "RemoveContainer" containerID="3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf" Nov 24 22:02:57 crc kubenswrapper[4767]: E1124 22:02:57.354636 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf\": container with ID starting with 3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf not found: ID does not exist" containerID="3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf" Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.354671 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf"} err="failed to get container status \"3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf\": rpc error: code = NotFound desc = could not find container \"3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf\": container with ID starting with 3a0621dc4280ab40f955ead4b2f8081d7ce63f9a40f1701537917c06f595d5cf not found: ID does not exist" Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.354693 4767 scope.go:117] "RemoveContainer" containerID="717bd5f3523689bc2b3da3c54e364e8095d23ef685fe1011da836b5143a16e6f" Nov 24 22:02:57 crc kubenswrapper[4767]: E1124 22:02:57.354969 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"717bd5f3523689bc2b3da3c54e364e8095d23ef685fe1011da836b5143a16e6f\": container with ID starting with 717bd5f3523689bc2b3da3c54e364e8095d23ef685fe1011da836b5143a16e6f not found: ID does not exist" containerID="717bd5f3523689bc2b3da3c54e364e8095d23ef685fe1011da836b5143a16e6f" Nov 24 22:02:57 crc kubenswrapper[4767]: I1124 22:02:57.355019 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717bd5f3523689bc2b3da3c54e364e8095d23ef685fe1011da836b5143a16e6f"} err="failed to get container status \"717bd5f3523689bc2b3da3c54e364e8095d23ef685fe1011da836b5143a16e6f\": rpc error: code = NotFound desc = could not find container \"717bd5f3523689bc2b3da3c54e364e8095d23ef685fe1011da836b5143a16e6f\": container with ID starting with 717bd5f3523689bc2b3da3c54e364e8095d23ef685fe1011da836b5143a16e6f not found: ID does not exist"
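
The NotFound errors above are harmless teardown noise: the API DELETE raced with removals the kubelet had already performed, so the repeat RemoveContainer calls and the follow-up ContainerStatus lookups find nothing. The usual client-side treatment is to fold NotFound into success, roughly (toy stand-in for the CRI call, not kubelet source):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer stands in for a CRI RemoveContainer call that loses
    // the race and reports the same NotFound shape as the log above.
    func removeContainer(id string) error {
        return status.Errorf(codes.NotFound, "could not find container %q: ID does not exist", id)
    }

    func main() {
        err := removeContainer("694c76efe4f7") // ID abbreviated from the log
        if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
            // Already gone: the desired end state holds, so treat it as done.
            fmt.Println("already removed:", s.Message())
            return
        }
        if err != nil {
            fmt.Println("retry later:", err)
        }
    }

Nov 24 22:02:58 crc kubenswrapper[4767]: I1124 22:02:58.328057 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192a3297-dbba-4dd7-aab1-89a4c49a78be" path="/var/lib/kubelet/pods/192a3297-dbba-4dd7-aab1-89a4c49a78be/volumes" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.591191 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qc4b9"] Nov 24 22:02:59 crc kubenswrapper[4767]: E1124 22:02:59.591667 4767 cpu_manager.go:410] "RemoveStaleState: removing container"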
podUID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerName="extract-content" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.591684 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerName="extract-content" Nov 24 22:02:59 crc kubenswrapper[4767]: E1124 22:02:59.591706 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerName="extract-utilities" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.591712 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerName="extract-utilities" Nov 24 22:02:59 crc kubenswrapper[4767]: E1124 22:02:59.591735 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerName="registry-server" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.591742 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerName="registry-server" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.591931 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="192a3297-dbba-4dd7-aab1-89a4c49a78be" containerName="registry-server" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.593423 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.607717 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc4b9"] Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.734938 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8xq\" (UniqueName: \"kubernetes.io/projected/307781c7-f2c9-40ea-9079-dc74e4cd04c9-kube-api-access-8n8xq\") pod \"redhat-marketplace-qc4b9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.735037 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-catalog-content\") pod \"redhat-marketplace-qc4b9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.735082 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-utilities\") pod \"redhat-marketplace-qc4b9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.836335 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n8xq\" (UniqueName: \"kubernetes.io/projected/307781c7-f2c9-40ea-9079-dc74e4cd04c9-kube-api-access-8n8xq\") pod \"redhat-marketplace-qc4b9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.836380 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-catalog-content\") pod 
\"redhat-marketplace-qc4b9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.836407 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-utilities\") pod \"redhat-marketplace-qc4b9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.836969 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-utilities\") pod \"redhat-marketplace-qc4b9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.837059 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-catalog-content\") pod \"redhat-marketplace-qc4b9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.872209 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n8xq\" (UniqueName: \"kubernetes.io/projected/307781c7-f2c9-40ea-9079-dc74e4cd04c9-kube-api-access-8n8xq\") pod \"redhat-marketplace-qc4b9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:02:59 crc kubenswrapper[4767]: I1124 22:02:59.918434 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:03:00 crc kubenswrapper[4767]: I1124 22:03:00.363854 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc4b9"] Nov 24 22:03:01 crc kubenswrapper[4767]: I1124 22:03:01.294877 4767 generic.go:334] "Generic (PLEG): container finished" podID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" containerID="1239cc10e389d1480e49f93f264687cda82a31847203580bcec431dfb2c63391" exitCode=0 Nov 24 22:03:01 crc kubenswrapper[4767]: I1124 22:03:01.294944 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc4b9" event={"ID":"307781c7-f2c9-40ea-9079-dc74e4cd04c9","Type":"ContainerDied","Data":"1239cc10e389d1480e49f93f264687cda82a31847203580bcec431dfb2c63391"} Nov 24 22:03:01 crc kubenswrapper[4767]: I1124 22:03:01.295208 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc4b9" event={"ID":"307781c7-f2c9-40ea-9079-dc74e4cd04c9","Type":"ContainerStarted","Data":"86e759bbcec25ba991f4a8ce0e2a63c5c1c3f1910e9495dd6c05b74be5a7b907"} Nov 24 22:03:02 crc kubenswrapper[4767]: I1124 22:03:02.306311 4767 generic.go:334] "Generic (PLEG): container finished" podID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" containerID="e268bb779dd80a861ebe0527415f91a23d316fcd386eab7e7130b81460101578" exitCode=0 Nov 24 22:03:02 crc kubenswrapper[4767]: I1124 22:03:02.306397 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc4b9" event={"ID":"307781c7-f2c9-40ea-9079-dc74e4cd04c9","Type":"ContainerDied","Data":"e268bb779dd80a861ebe0527415f91a23d316fcd386eab7e7130b81460101578"} Nov 24 22:03:03 crc kubenswrapper[4767]: I1124 22:03:03.328943 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc4b9" event={"ID":"307781c7-f2c9-40ea-9079-dc74e4cd04c9","Type":"ContainerStarted","Data":"498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27"} Nov 24 22:03:03 crc kubenswrapper[4767]: I1124 22:03:03.352047 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qc4b9" podStartSLOduration=2.7322591000000003 podStartE2EDuration="4.352028676s" podCreationTimestamp="2025-11-24 22:02:59 +0000 UTC" firstStartedPulling="2025-11-24 22:03:01.297353264 +0000 UTC m=+1464.214336636" lastFinishedPulling="2025-11-24 22:03:02.91712284 +0000 UTC m=+1465.834106212" observedRunningTime="2025-11-24 22:03:03.347624512 +0000 UTC m=+1466.264607894" watchObservedRunningTime="2025-11-24 22:03:03.352028676 +0000 UTC m=+1466.269012048" Nov 24 22:03:05 crc kubenswrapper[4767]: I1124 22:03:05.481697 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:03:05 crc kubenswrapper[4767]: I1124 22:03:05.482046 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:03:09 crc kubenswrapper[4767]: I1124 22:03:09.918927 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:03:09 crc kubenswrapper[4767]: I1124 22:03:09.919995 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:03:09 crc kubenswrapper[4767]: I1124 22:03:09.969435 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:03:10 crc kubenswrapper[4767]: I1124 22:03:10.442084 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:03:10 crc kubenswrapper[4767]: I1124 22:03:10.496112 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc4b9"] Nov 24 22:03:12 crc kubenswrapper[4767]: I1124 22:03:12.421064 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qc4b9" podUID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" containerName="registry-server" containerID="cri-o://498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27" gracePeriod=2 Nov 24 22:03:12 crc kubenswrapper[4767]: I1124 22:03:12.898940 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:03:12 crc kubenswrapper[4767]: I1124 22:03:12.987068 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-catalog-content\") pod \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " Nov 24 22:03:12 crc kubenswrapper[4767]: I1124 22:03:12.987105 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-utilities\") pod \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " Nov 24 22:03:12 crc kubenswrapper[4767]: I1124 22:03:12.987301 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n8xq\" (UniqueName: \"kubernetes.io/projected/307781c7-f2c9-40ea-9079-dc74e4cd04c9-kube-api-access-8n8xq\") pod \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\" (UID: \"307781c7-f2c9-40ea-9079-dc74e4cd04c9\") " Nov 24 22:03:12 crc kubenswrapper[4767]: I1124 22:03:12.988033 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-utilities" (OuterVolumeSpecName: "utilities") pod "307781c7-f2c9-40ea-9079-dc74e4cd04c9" (UID: "307781c7-f2c9-40ea-9079-dc74e4cd04c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:03:12 crc kubenswrapper[4767]: I1124 22:03:12.992875 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/307781c7-f2c9-40ea-9079-dc74e4cd04c9-kube-api-access-8n8xq" (OuterVolumeSpecName: "kube-api-access-8n8xq") pod "307781c7-f2c9-40ea-9079-dc74e4cd04c9" (UID: "307781c7-f2c9-40ea-9079-dc74e4cd04c9"). InnerVolumeSpecName "kube-api-access-8n8xq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.004788 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "307781c7-f2c9-40ea-9079-dc74e4cd04c9" (UID: "307781c7-f2c9-40ea-9079-dc74e4cd04c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.090191 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n8xq\" (UniqueName: \"kubernetes.io/projected/307781c7-f2c9-40ea-9079-dc74e4cd04c9-kube-api-access-8n8xq\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.090229 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.090251 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/307781c7-f2c9-40ea-9079-dc74e4cd04c9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.435980 4767 generic.go:334] "Generic (PLEG): container finished" podID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" containerID="498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27" exitCode=0 Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.436039 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc4b9" event={"ID":"307781c7-f2c9-40ea-9079-dc74e4cd04c9","Type":"ContainerDied","Data":"498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27"} Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.436075 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc4b9" event={"ID":"307781c7-f2c9-40ea-9079-dc74e4cd04c9","Type":"ContainerDied","Data":"86e759bbcec25ba991f4a8ce0e2a63c5c1c3f1910e9495dd6c05b74be5a7b907"} Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.436097 4767 scope.go:117] "RemoveContainer" containerID="498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.436253 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc4b9" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.474667 4767 scope.go:117] "RemoveContainer" containerID="e268bb779dd80a861ebe0527415f91a23d316fcd386eab7e7130b81460101578" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.477056 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc4b9"] Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.486083 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc4b9"] Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.509803 4767 scope.go:117] "RemoveContainer" containerID="1239cc10e389d1480e49f93f264687cda82a31847203580bcec431dfb2c63391" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.559989 4767 scope.go:117] "RemoveContainer" containerID="498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27" Nov 24 22:03:13 crc kubenswrapper[4767]: E1124 22:03:13.560566 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27\": container with ID starting with 498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27 not found: ID does not exist" containerID="498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.560637 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27"} err="failed to get container status \"498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27\": rpc error: code = NotFound desc = could not find container \"498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27\": container with ID starting with 498240c57804c12eef23fa816f36df58005da9352e53a5b3ca0b3349a541cb27 not found: ID does not exist" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.560691 4767 scope.go:117] "RemoveContainer" containerID="e268bb779dd80a861ebe0527415f91a23d316fcd386eab7e7130b81460101578" Nov 24 22:03:13 crc kubenswrapper[4767]: E1124 22:03:13.561187 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e268bb779dd80a861ebe0527415f91a23d316fcd386eab7e7130b81460101578\": container with ID starting with e268bb779dd80a861ebe0527415f91a23d316fcd386eab7e7130b81460101578 not found: ID does not exist" containerID="e268bb779dd80a861ebe0527415f91a23d316fcd386eab7e7130b81460101578" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.561239 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e268bb779dd80a861ebe0527415f91a23d316fcd386eab7e7130b81460101578"} err="failed to get container status \"e268bb779dd80a861ebe0527415f91a23d316fcd386eab7e7130b81460101578\": rpc error: code = NotFound desc = could not find container \"e268bb779dd80a861ebe0527415f91a23d316fcd386eab7e7130b81460101578\": container with ID starting with e268bb779dd80a861ebe0527415f91a23d316fcd386eab7e7130b81460101578 not found: ID does not exist" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.561281 4767 scope.go:117] "RemoveContainer" containerID="1239cc10e389d1480e49f93f264687cda82a31847203580bcec431dfb2c63391" Nov 24 22:03:13 crc kubenswrapper[4767]: E1124 22:03:13.561628 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1239cc10e389d1480e49f93f264687cda82a31847203580bcec431dfb2c63391\": container with ID starting with 1239cc10e389d1480e49f93f264687cda82a31847203580bcec431dfb2c63391 not found: ID does not exist" containerID="1239cc10e389d1480e49f93f264687cda82a31847203580bcec431dfb2c63391" Nov 24 22:03:13 crc kubenswrapper[4767]: I1124 22:03:13.561658 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1239cc10e389d1480e49f93f264687cda82a31847203580bcec431dfb2c63391"} err="failed to get container status \"1239cc10e389d1480e49f93f264687cda82a31847203580bcec431dfb2c63391\": rpc error: code = NotFound desc = could not find container \"1239cc10e389d1480e49f93f264687cda82a31847203580bcec431dfb2c63391\": container with ID starting with 1239cc10e389d1480e49f93f264687cda82a31847203580bcec431dfb2c63391 not found: ID does not exist" Nov 24 22:03:14 crc kubenswrapper[4767]: I1124 22:03:14.326984 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" path="/var/lib/kubelet/pods/307781c7-f2c9-40ea-9079-dc74e4cd04c9/volumes" Nov 24 22:03:35 crc kubenswrapper[4767]: I1124 22:03:35.481038 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:03:35 crc kubenswrapper[4767]: I1124 22:03:35.481701 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:03:35 crc kubenswrapper[4767]: I1124 22:03:35.481765 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 22:03:35 crc kubenswrapper[4767]: I1124 22:03:35.482858 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:03:35 crc kubenswrapper[4767]: I1124 22:03:35.482959 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" gracePeriod=600 Nov 24 22:03:35 crc kubenswrapper[4767]: E1124 22:03:35.619090 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:03:35 crc kubenswrapper[4767]: I1124 22:03:35.717140 4767 generic.go:334] 
"Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" exitCode=0 Nov 24 22:03:35 crc kubenswrapper[4767]: I1124 22:03:35.717238 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c"} Nov 24 22:03:35 crc kubenswrapper[4767]: I1124 22:03:35.717352 4767 scope.go:117] "RemoveContainer" containerID="cb71cfb4f27344cb7cceaf9ac7651774b144254e6ab13360f5b5c998afd38e04" Nov 24 22:03:35 crc kubenswrapper[4767]: I1124 22:03:35.718347 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:03:35 crc kubenswrapper[4767]: E1124 22:03:35.718842 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.467864 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8jd59"] Nov 24 22:03:43 crc kubenswrapper[4767]: E1124 22:03:43.468949 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" containerName="registry-server" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.468966 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" containerName="registry-server" Nov 24 22:03:43 crc kubenswrapper[4767]: E1124 22:03:43.468980 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" containerName="extract-utilities" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.468991 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" containerName="extract-utilities" Nov 24 22:03:43 crc kubenswrapper[4767]: E1124 22:03:43.469008 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" containerName="extract-content" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.469016 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" containerName="extract-content" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.469353 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="307781c7-f2c9-40ea-9079-dc74e4cd04c9" containerName="registry-server" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.473165 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.504779 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jd59"] Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.613569 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-utilities\") pod \"certified-operators-8jd59\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.613718 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-catalog-content\") pod \"certified-operators-8jd59\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.613754 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkbd\" (UniqueName: \"kubernetes.io/projected/678c3464-36bc-405f-94ec-055138150037-kube-api-access-phkbd\") pod \"certified-operators-8jd59\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.715083 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-utilities\") pod \"certified-operators-8jd59\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.715170 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-catalog-content\") pod \"certified-operators-8jd59\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.715195 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phkbd\" (UniqueName: \"kubernetes.io/projected/678c3464-36bc-405f-94ec-055138150037-kube-api-access-phkbd\") pod \"certified-operators-8jd59\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.715730 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-utilities\") pod \"certified-operators-8jd59\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.715764 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-catalog-content\") pod \"certified-operators-8jd59\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.737125 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-phkbd\" (UniqueName: \"kubernetes.io/projected/678c3464-36bc-405f-94ec-055138150037-kube-api-access-phkbd\") pod \"certified-operators-8jd59\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:43 crc kubenswrapper[4767]: I1124 22:03:43.807870 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:44 crc kubenswrapper[4767]: I1124 22:03:44.310339 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jd59"] Nov 24 22:03:44 crc kubenswrapper[4767]: I1124 22:03:44.807080 4767 generic.go:334] "Generic (PLEG): container finished" podID="678c3464-36bc-405f-94ec-055138150037" containerID="f0f26de6e24202148b3548e2c5ae606bbf1afb2a7ba144b85984a6fc89f16aec" exitCode=0 Nov 24 22:03:44 crc kubenswrapper[4767]: I1124 22:03:44.807132 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jd59" event={"ID":"678c3464-36bc-405f-94ec-055138150037","Type":"ContainerDied","Data":"f0f26de6e24202148b3548e2c5ae606bbf1afb2a7ba144b85984a6fc89f16aec"} Nov 24 22:03:44 crc kubenswrapper[4767]: I1124 22:03:44.807707 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jd59" event={"ID":"678c3464-36bc-405f-94ec-055138150037","Type":"ContainerStarted","Data":"d2e205eb1b64cb7c2295650491ae9932c292ee38dc2ff6867e0af3d70c3f3b9c"} Nov 24 22:03:44 crc kubenswrapper[4767]: I1124 22:03:44.809600 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:03:45 crc kubenswrapper[4767]: I1124 22:03:45.819486 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jd59" event={"ID":"678c3464-36bc-405f-94ec-055138150037","Type":"ContainerStarted","Data":"5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e"} Nov 24 22:03:46 crc kubenswrapper[4767]: I1124 22:03:46.835288 4767 generic.go:334] "Generic (PLEG): container finished" podID="678c3464-36bc-405f-94ec-055138150037" containerID="5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e" exitCode=0 Nov 24 22:03:46 crc kubenswrapper[4767]: I1124 22:03:46.835356 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jd59" event={"ID":"678c3464-36bc-405f-94ec-055138150037","Type":"ContainerDied","Data":"5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e"} Nov 24 22:03:47 crc kubenswrapper[4767]: I1124 22:03:47.313526 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:03:47 crc kubenswrapper[4767]: E1124 22:03:47.314094 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:03:47 crc kubenswrapper[4767]: I1124 22:03:47.849140 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jd59" 
event={"ID":"678c3464-36bc-405f-94ec-055138150037","Type":"ContainerStarted","Data":"2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8"} Nov 24 22:03:47 crc kubenswrapper[4767]: I1124 22:03:47.892693 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8jd59" podStartSLOduration=2.240392047 podStartE2EDuration="4.892670636s" podCreationTimestamp="2025-11-24 22:03:43 +0000 UTC" firstStartedPulling="2025-11-24 22:03:44.809303494 +0000 UTC m=+1507.726286876" lastFinishedPulling="2025-11-24 22:03:47.461582093 +0000 UTC m=+1510.378565465" observedRunningTime="2025-11-24 22:03:47.87796263 +0000 UTC m=+1510.794946022" watchObservedRunningTime="2025-11-24 22:03:47.892670636 +0000 UTC m=+1510.809654008" Nov 24 22:03:53 crc kubenswrapper[4767]: I1124 22:03:53.808343 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:53 crc kubenswrapper[4767]: I1124 22:03:53.808896 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:53 crc kubenswrapper[4767]: I1124 22:03:53.917901 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:53 crc kubenswrapper[4767]: I1124 22:03:53.990950 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:54 crc kubenswrapper[4767]: I1124 22:03:54.179925 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jd59"] Nov 24 22:03:55 crc kubenswrapper[4767]: I1124 22:03:55.938120 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8jd59" podUID="678c3464-36bc-405f-94ec-055138150037" containerName="registry-server" containerID="cri-o://2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8" gracePeriod=2 Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.639550 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.693016 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phkbd\" (UniqueName: \"kubernetes.io/projected/678c3464-36bc-405f-94ec-055138150037-kube-api-access-phkbd\") pod \"678c3464-36bc-405f-94ec-055138150037\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.693386 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-catalog-content\") pod \"678c3464-36bc-405f-94ec-055138150037\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.693472 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-utilities\") pod \"678c3464-36bc-405f-94ec-055138150037\" (UID: \"678c3464-36bc-405f-94ec-055138150037\") " Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.695068 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-utilities" (OuterVolumeSpecName: "utilities") pod "678c3464-36bc-405f-94ec-055138150037" (UID: "678c3464-36bc-405f-94ec-055138150037"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.699395 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678c3464-36bc-405f-94ec-055138150037-kube-api-access-phkbd" (OuterVolumeSpecName: "kube-api-access-phkbd") pod "678c3464-36bc-405f-94ec-055138150037" (UID: "678c3464-36bc-405f-94ec-055138150037"). InnerVolumeSpecName "kube-api-access-phkbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.752905 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "678c3464-36bc-405f-94ec-055138150037" (UID: "678c3464-36bc-405f-94ec-055138150037"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.795584 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phkbd\" (UniqueName: \"kubernetes.io/projected/678c3464-36bc-405f-94ec-055138150037-kube-api-access-phkbd\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.795615 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.795625 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678c3464-36bc-405f-94ec-055138150037-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.970065 4767 generic.go:334] "Generic (PLEG): container finished" podID="678c3464-36bc-405f-94ec-055138150037" containerID="2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8" exitCode=0 Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.970147 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jd59" event={"ID":"678c3464-36bc-405f-94ec-055138150037","Type":"ContainerDied","Data":"2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8"} Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.970393 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jd59" Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.972447 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jd59" event={"ID":"678c3464-36bc-405f-94ec-055138150037","Type":"ContainerDied","Data":"d2e205eb1b64cb7c2295650491ae9932c292ee38dc2ff6867e0af3d70c3f3b9c"} Nov 24 22:03:56 crc kubenswrapper[4767]: I1124 22:03:56.972527 4767 scope.go:117] "RemoveContainer" containerID="2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8" Nov 24 22:03:57 crc kubenswrapper[4767]: I1124 22:03:57.003910 4767 scope.go:117] "RemoveContainer" containerID="5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e" Nov 24 22:03:57 crc kubenswrapper[4767]: I1124 22:03:57.008750 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jd59"] Nov 24 22:03:57 crc kubenswrapper[4767]: I1124 22:03:57.018940 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8jd59"] Nov 24 22:03:57 crc kubenswrapper[4767]: I1124 22:03:57.042701 4767 scope.go:117] "RemoveContainer" containerID="f0f26de6e24202148b3548e2c5ae606bbf1afb2a7ba144b85984a6fc89f16aec" Nov 24 22:03:57 crc kubenswrapper[4767]: I1124 22:03:57.081625 4767 scope.go:117] "RemoveContainer" containerID="2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8" Nov 24 22:03:57 crc kubenswrapper[4767]: E1124 22:03:57.082139 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8\": container with ID starting with 2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8 not found: ID does not exist" containerID="2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8" Nov 24 22:03:57 crc kubenswrapper[4767]: I1124 22:03:57.082194 
4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8"} err="failed to get container status \"2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8\": rpc error: code = NotFound desc = could not find container \"2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8\": container with ID starting with 2004a9f325e88ec8596f2486c49d2b134dc1c4375f52e599d54679fae4f19ee8 not found: ID does not exist" Nov 24 22:03:57 crc kubenswrapper[4767]: I1124 22:03:57.082227 4767 scope.go:117] "RemoveContainer" containerID="5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e" Nov 24 22:03:57 crc kubenswrapper[4767]: E1124 22:03:57.083034 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e\": container with ID starting with 5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e not found: ID does not exist" containerID="5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e" Nov 24 22:03:57 crc kubenswrapper[4767]: I1124 22:03:57.083076 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e"} err="failed to get container status \"5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e\": rpc error: code = NotFound desc = could not find container \"5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e\": container with ID starting with 5d9708fc4c48ff8d2c3800db4da90d471dd16af4583639bffd863fd59df21f5e not found: ID does not exist" Nov 24 22:03:57 crc kubenswrapper[4767]: I1124 22:03:57.083104 4767 scope.go:117] "RemoveContainer" containerID="f0f26de6e24202148b3548e2c5ae606bbf1afb2a7ba144b85984a6fc89f16aec" Nov 24 22:03:57 crc kubenswrapper[4767]: E1124 22:03:57.083429 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0f26de6e24202148b3548e2c5ae606bbf1afb2a7ba144b85984a6fc89f16aec\": container with ID starting with f0f26de6e24202148b3548e2c5ae606bbf1afb2a7ba144b85984a6fc89f16aec not found: ID does not exist" containerID="f0f26de6e24202148b3548e2c5ae606bbf1afb2a7ba144b85984a6fc89f16aec" Nov 24 22:03:57 crc kubenswrapper[4767]: I1124 22:03:57.083447 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0f26de6e24202148b3548e2c5ae606bbf1afb2a7ba144b85984a6fc89f16aec"} err="failed to get container status \"f0f26de6e24202148b3548e2c5ae606bbf1afb2a7ba144b85984a6fc89f16aec\": rpc error: code = NotFound desc = could not find container \"f0f26de6e24202148b3548e2c5ae606bbf1afb2a7ba144b85984a6fc89f16aec\": container with ID starting with f0f26de6e24202148b3548e2c5ae606bbf1afb2a7ba144b85984a6fc89f16aec not found: ID does not exist" Nov 24 22:03:58 crc kubenswrapper[4767]: I1124 22:03:58.323456 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:03:58 crc kubenswrapper[4767]: E1124 22:03:58.324052 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:03:58 crc kubenswrapper[4767]: I1124 22:03:58.327675 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678c3464-36bc-405f-94ec-055138150037" path="/var/lib/kubelet/pods/678c3464-36bc-405f-94ec-055138150037/volumes" Nov 24 22:04:10 crc kubenswrapper[4767]: I1124 22:04:10.314575 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:04:10 crc kubenswrapper[4767]: E1124 22:04:10.315683 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:04:22 crc kubenswrapper[4767]: I1124 22:04:22.314259 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:04:22 crc kubenswrapper[4767]: E1124 22:04:22.315589 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:04:23 crc kubenswrapper[4767]: I1124 22:04:23.255340 4767 generic.go:334] "Generic (PLEG): container finished" podID="4f4e8bd7-4b90-4d32-b3f3-36011d7820bc" containerID="8b9697a16175f12169748abbe0ed801bc8ba3e3cd697aced3e47df5d80e6160f" exitCode=0 Nov 24 22:04:23 crc kubenswrapper[4767]: I1124 22:04:23.255458 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" event={"ID":"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc","Type":"ContainerDied","Data":"8b9697a16175f12169748abbe0ed801bc8ba3e3cd697aced3e47df5d80e6160f"} Nov 24 22:04:24 crc kubenswrapper[4767]: I1124 22:04:24.801200 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:04:24 crc kubenswrapper[4767]: I1124 22:04:24.941359 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-ssh-key\") pod \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " Nov 24 22:04:24 crc kubenswrapper[4767]: I1124 22:04:24.941572 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6xjz\" (UniqueName: \"kubernetes.io/projected/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-kube-api-access-s6xjz\") pod \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " Nov 24 22:04:24 crc kubenswrapper[4767]: I1124 22:04:24.941619 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-inventory\") pod \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " Nov 24 22:04:24 crc kubenswrapper[4767]: I1124 22:04:24.941645 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-bootstrap-combined-ca-bundle\") pod \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\" (UID: \"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc\") " Nov 24 22:04:24 crc kubenswrapper[4767]: I1124 22:04:24.948094 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4f4e8bd7-4b90-4d32-b3f3-36011d7820bc" (UID: "4f4e8bd7-4b90-4d32-b3f3-36011d7820bc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:04:24 crc kubenswrapper[4767]: I1124 22:04:24.948215 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-kube-api-access-s6xjz" (OuterVolumeSpecName: "kube-api-access-s6xjz") pod "4f4e8bd7-4b90-4d32-b3f3-36011d7820bc" (UID: "4f4e8bd7-4b90-4d32-b3f3-36011d7820bc"). InnerVolumeSpecName "kube-api-access-s6xjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:04:24 crc kubenswrapper[4767]: I1124 22:04:24.971114 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-inventory" (OuterVolumeSpecName: "inventory") pod "4f4e8bd7-4b90-4d32-b3f3-36011d7820bc" (UID: "4f4e8bd7-4b90-4d32-b3f3-36011d7820bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:04:24 crc kubenswrapper[4767]: I1124 22:04:24.979535 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4f4e8bd7-4b90-4d32-b3f3-36011d7820bc" (UID: "4f4e8bd7-4b90-4d32-b3f3-36011d7820bc"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.044778 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6xjz\" (UniqueName: \"kubernetes.io/projected/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-kube-api-access-s6xjz\") on node \"crc\" DevicePath \"\"" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.044811 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.044820 4767 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.044829 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f4e8bd7-4b90-4d32-b3f3-36011d7820bc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.283343 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" event={"ID":"4f4e8bd7-4b90-4d32-b3f3-36011d7820bc","Type":"ContainerDied","Data":"274699a94db38831d6d344316bd4d3ca38d6571d6d313a8274fa56e1e6ac56ed"} Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.283988 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="274699a94db38831d6d344316bd4d3ca38d6571d6d313a8274fa56e1e6ac56ed" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.283422 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.376283 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz"] Nov 24 22:04:25 crc kubenswrapper[4767]: E1124 22:04:25.376657 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678c3464-36bc-405f-94ec-055138150037" containerName="registry-server" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.376673 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="678c3464-36bc-405f-94ec-055138150037" containerName="registry-server" Nov 24 22:04:25 crc kubenswrapper[4767]: E1124 22:04:25.376687 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678c3464-36bc-405f-94ec-055138150037" containerName="extract-content" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.376693 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="678c3464-36bc-405f-94ec-055138150037" containerName="extract-content" Nov 24 22:04:25 crc kubenswrapper[4767]: E1124 22:04:25.376712 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678c3464-36bc-405f-94ec-055138150037" containerName="extract-utilities" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.376718 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="678c3464-36bc-405f-94ec-055138150037" containerName="extract-utilities" Nov 24 22:04:25 crc kubenswrapper[4767]: E1124 22:04:25.376745 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4e8bd7-4b90-4d32-b3f3-36011d7820bc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.376752 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4e8bd7-4b90-4d32-b3f3-36011d7820bc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.376931 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4e8bd7-4b90-4d32-b3f3-36011d7820bc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.376954 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="678c3464-36bc-405f-94ec-055138150037" containerName="registry-server" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.377644 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.379717 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.383412 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.383845 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.384262 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.384613 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz"] Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.555761 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67wdz\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.555838 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67wdz\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.556045 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmqn4\" (UniqueName: \"kubernetes.io/projected/b9c77001-7f38-42a1-9515-7fbe495d2577-kube-api-access-vmqn4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67wdz\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.657654 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67wdz\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.657759 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67wdz\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.657813 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmqn4\" (UniqueName: \"kubernetes.io/projected/b9c77001-7f38-42a1-9515-7fbe495d2577-kube-api-access-vmqn4\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-67wdz\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.664908 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67wdz\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.667679 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67wdz\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.677443 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmqn4\" (UniqueName: \"kubernetes.io/projected/b9c77001-7f38-42a1-9515-7fbe495d2577-kube-api-access-vmqn4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-67wdz\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:04:25 crc kubenswrapper[4767]: I1124 22:04:25.704068 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:04:26 crc kubenswrapper[4767]: I1124 22:04:26.288605 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz"] Nov 24 22:04:27 crc kubenswrapper[4767]: I1124 22:04:27.313238 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" event={"ID":"b9c77001-7f38-42a1-9515-7fbe495d2577","Type":"ContainerStarted","Data":"de0dafbeb2cf5faa3188e442b9782fd31b723bdf4a6e8933c35e2e1f98690f38"} Nov 24 22:04:27 crc kubenswrapper[4767]: I1124 22:04:27.313902 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" event={"ID":"b9c77001-7f38-42a1-9515-7fbe495d2577","Type":"ContainerStarted","Data":"1bc2966c6ed0c728518e56ca3f64f5f440724457e72ce25d1ddc1a6365de7ef4"} Nov 24 22:04:27 crc kubenswrapper[4767]: I1124 22:04:27.337308 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" podStartSLOduration=1.9152095839999999 podStartE2EDuration="2.337262821s" podCreationTimestamp="2025-11-24 22:04:25 +0000 UTC" firstStartedPulling="2025-11-24 22:04:26.291701409 +0000 UTC m=+1549.208684781" lastFinishedPulling="2025-11-24 22:04:26.713754646 +0000 UTC m=+1549.630738018" observedRunningTime="2025-11-24 22:04:27.331507958 +0000 UTC m=+1550.248491330" watchObservedRunningTime="2025-11-24 22:04:27.337262821 +0000 UTC m=+1550.254246193" Nov 24 22:04:35 crc kubenswrapper[4767]: I1124 22:04:35.314926 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:04:35 crc kubenswrapper[4767]: E1124 22:04:35.316239 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:04:40 crc kubenswrapper[4767]: I1124 22:04:40.996620 4767 scope.go:117] "RemoveContainer" containerID="2c024080de5204ac3bc7215f0d73eb073f34e6b703d85c6d7c8497d7235077bc" Nov 24 22:04:41 crc kubenswrapper[4767]: I1124 22:04:41.026767 4767 scope.go:117] "RemoveContainer" containerID="633ef015a6a75b60caa91dde44609ed958b624d31c001a4965f1df1fc435e86c" Nov 24 22:04:41 crc kubenswrapper[4767]: I1124 22:04:41.052324 4767 scope.go:117] "RemoveContainer" containerID="e376c9d935086674c61de6631e0a46a078f89c0c5db12143e76b4a78ae5e986f" Nov 24 22:04:42 crc kubenswrapper[4767]: I1124 22:04:42.061009 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-2grc5"] Nov 24 22:04:42 crc kubenswrapper[4767]: I1124 22:04:42.072064 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9sh7l"] Nov 24 22:04:42 crc kubenswrapper[4767]: I1124 22:04:42.090187 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-2grc5"] Nov 24 22:04:42 crc kubenswrapper[4767]: I1124 22:04:42.100635 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9sh7l"] Nov 24 22:04:42 crc kubenswrapper[4767]: I1124 22:04:42.335179 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0863e69a-b331-4647-a79c-d0a2e182f14d" path="/var/lib/kubelet/pods/0863e69a-b331-4647-a79c-d0a2e182f14d/volumes" Nov 24 22:04:42 crc kubenswrapper[4767]: I1124 22:04:42.339573 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec" path="/var/lib/kubelet/pods/ba6fc0ff-0fa5-4cba-b0bc-12e4f4bc8dec/volumes" Nov 24 22:04:43 crc kubenswrapper[4767]: I1124 22:04:43.040112 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rcbjg"] Nov 24 22:04:43 crc kubenswrapper[4767]: I1124 22:04:43.051125 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8d7e-account-create-vv26x"] Nov 24 22:04:43 crc kubenswrapper[4767]: I1124 22:04:43.065899 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rcbjg"] Nov 24 22:04:43 crc kubenswrapper[4767]: I1124 22:04:43.078646 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c148-account-create-dm25n"] Nov 24 22:04:43 crc kubenswrapper[4767]: I1124 22:04:43.090934 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-48vjg"] Nov 24 22:04:43 crc kubenswrapper[4767]: I1124 22:04:43.103288 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-358f-account-create-4kwkf"] Nov 24 22:04:43 crc kubenswrapper[4767]: I1124 22:04:43.111283 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8d7e-account-create-vv26x"] Nov 24 22:04:43 crc kubenswrapper[4767]: I1124 22:04:43.119046 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-358f-account-create-4kwkf"] Nov 24 22:04:43 crc kubenswrapper[4767]: I1124 22:04:43.126561 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-48vjg"] Nov 24 22:04:43 crc 
kubenswrapper[4767]: I1124 22:04:43.135570 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c148-account-create-dm25n"] Nov 24 22:04:44 crc kubenswrapper[4767]: I1124 22:04:44.038610 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-134c-account-create-bkvsg"] Nov 24 22:04:44 crc kubenswrapper[4767]: I1124 22:04:44.050315 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-134c-account-create-bkvsg"] Nov 24 22:04:44 crc kubenswrapper[4767]: I1124 22:04:44.333139 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a407b22-b744-42f8-9746-30f7b21c8e2b" path="/var/lib/kubelet/pods/2a407b22-b744-42f8-9746-30f7b21c8e2b/volumes" Nov 24 22:04:44 crc kubenswrapper[4767]: I1124 22:04:44.336902 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef6da9a-e416-4a02-8507-1a4caabc88c6" path="/var/lib/kubelet/pods/4ef6da9a-e416-4a02-8507-1a4caabc88c6/volumes" Nov 24 22:04:44 crc kubenswrapper[4767]: I1124 22:04:44.340095 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910" path="/var/lib/kubelet/pods/6045c07d-e6f9-4bd9-9a6e-e60f4b7b5910/volumes" Nov 24 22:04:44 crc kubenswrapper[4767]: I1124 22:04:44.343351 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88ce5857-f490-47b9-b07d-ecf4d1aa2045" path="/var/lib/kubelet/pods/88ce5857-f490-47b9-b07d-ecf4d1aa2045/volumes" Nov 24 22:04:44 crc kubenswrapper[4767]: I1124 22:04:44.346208 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda" path="/var/lib/kubelet/pods/92bd7ac9-4d3e-4e41-8cc6-03fd71a99bda/volumes" Nov 24 22:04:44 crc kubenswrapper[4767]: I1124 22:04:44.349680 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9e387f-20cc-4618-915a-bf9a33b40ddd" path="/var/lib/kubelet/pods/cd9e387f-20cc-4618-915a-bf9a33b40ddd/volumes" Nov 24 22:04:47 crc kubenswrapper[4767]: I1124 22:04:47.313611 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:04:47 crc kubenswrapper[4767]: E1124 22:04:47.314137 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:05:01 crc kubenswrapper[4767]: I1124 22:05:01.313688 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:05:01 crc kubenswrapper[4767]: E1124 22:05:01.314491 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:05:10 crc kubenswrapper[4767]: I1124 22:05:10.056033 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2216-account-create-2hzgb"] Nov 24 22:05:10 crc kubenswrapper[4767]: I1124 
22:05:10.067913 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0f65-account-create-h2qbg"] Nov 24 22:05:10 crc kubenswrapper[4767]: I1124 22:05:10.080865 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2216-account-create-2hzgb"] Nov 24 22:05:10 crc kubenswrapper[4767]: I1124 22:05:10.092721 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0f65-account-create-h2qbg"] Nov 24 22:05:10 crc kubenswrapper[4767]: I1124 22:05:10.326831 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f386a17-08d4-4c2d-8727-5171cb4275a5" path="/var/lib/kubelet/pods/0f386a17-08d4-4c2d-8727-5171cb4275a5/volumes" Nov 24 22:05:10 crc kubenswrapper[4767]: I1124 22:05:10.335249 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a72f88f-06d7-4a5f-b391-976efcc9ea67" path="/var/lib/kubelet/pods/5a72f88f-06d7-4a5f-b391-976efcc9ea67/volumes" Nov 24 22:05:12 crc kubenswrapper[4767]: I1124 22:05:12.313937 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:05:12 crc kubenswrapper[4767]: E1124 22:05:12.314629 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:05:17 crc kubenswrapper[4767]: I1124 22:05:17.042880 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-q7bzm"] Nov 24 22:05:17 crc kubenswrapper[4767]: I1124 22:05:17.062626 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-l8gk2"] Nov 24 22:05:17 crc kubenswrapper[4767]: I1124 22:05:17.070342 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8fbf-account-create-25zrr"] Nov 24 22:05:17 crc kubenswrapper[4767]: I1124 22:05:17.077871 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-q7bzm"] Nov 24 22:05:17 crc kubenswrapper[4767]: I1124 22:05:17.084888 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-l8gk2"] Nov 24 22:05:17 crc kubenswrapper[4767]: I1124 22:05:17.092391 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8fbf-account-create-25zrr"] Nov 24 22:05:18 crc kubenswrapper[4767]: I1124 22:05:18.328704 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3983d70b-b45a-4ee3-a9ef-988fa258635b" path="/var/lib/kubelet/pods/3983d70b-b45a-4ee3-a9ef-988fa258635b/volumes" Nov 24 22:05:18 crc kubenswrapper[4767]: I1124 22:05:18.331356 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5783bdd7-a5b2-4ba7-9aa5-505f01383747" path="/var/lib/kubelet/pods/5783bdd7-a5b2-4ba7-9aa5-505f01383747/volumes" Nov 24 22:05:18 crc kubenswrapper[4767]: I1124 22:05:18.333066 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab05c5db-4946-423d-8123-d76eaa3f716a" path="/var/lib/kubelet/pods/ab05c5db-4946-423d-8123-d76eaa3f716a/volumes" Nov 24 22:05:21 crc kubenswrapper[4767]: I1124 22:05:21.037523 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hjlg6"] Nov 24 22:05:21 crc kubenswrapper[4767]: 
I1124 22:05:21.062103 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hjlg6"] Nov 24 22:05:21 crc kubenswrapper[4767]: I1124 22:05:21.074411 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-6dw5c"] Nov 24 22:05:21 crc kubenswrapper[4767]: I1124 22:05:21.082136 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6dw5c"] Nov 24 22:05:22 crc kubenswrapper[4767]: I1124 22:05:22.336090 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96621856-cbd1-4e79-a210-59cb502ba291" path="/var/lib/kubelet/pods/96621856-cbd1-4e79-a210-59cb502ba291/volumes" Nov 24 22:05:22 crc kubenswrapper[4767]: I1124 22:05:22.338865 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba84d81f-ea11-4c51-81a1-2edfd90b9144" path="/var/lib/kubelet/pods/ba84d81f-ea11-4c51-81a1-2edfd90b9144/volumes" Nov 24 22:05:25 crc kubenswrapper[4767]: I1124 22:05:25.313950 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:05:25 crc kubenswrapper[4767]: E1124 22:05:25.314590 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:05:26 crc kubenswrapper[4767]: I1124 22:05:26.046517 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-7wzxw"] Nov 24 22:05:26 crc kubenswrapper[4767]: I1124 22:05:26.054940 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-7wzxw"] Nov 24 22:05:26 crc kubenswrapper[4767]: I1124 22:05:26.326992 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d803aeed-f0af-4587-b58d-1e7e8273a21d" path="/var/lib/kubelet/pods/d803aeed-f0af-4587-b58d-1e7e8273a21d/volumes" Nov 24 22:05:38 crc kubenswrapper[4767]: I1124 22:05:38.320889 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:05:38 crc kubenswrapper[4767]: E1124 22:05:38.321912 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:05:40 crc kubenswrapper[4767]: I1124 22:05:40.059318 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-mj4wm"] Nov 24 22:05:40 crc kubenswrapper[4767]: I1124 22:05:40.069105 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-mj4wm"] Nov 24 22:05:40 crc kubenswrapper[4767]: I1124 22:05:40.329710 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134b8eee-26a9-42c6-adec-2ac29ee455ed" path="/var/lib/kubelet/pods/134b8eee-26a9-42c6-adec-2ac29ee455ed/volumes" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.148452 4767 scope.go:117] "RemoveContainer" 
containerID="1090b95d987f7a1ed0cf64ecf9ab93d603b564a39d35ea1fe16054568cbbd445" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.179785 4767 scope.go:117] "RemoveContainer" containerID="33f9aa175322eae694e4347f835d3a61ce610abcf2358b8f2b380f614d1b7f79" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.231190 4767 scope.go:117] "RemoveContainer" containerID="8c0ba9ef8e119586eed17fcd187e6e421c462d8180cb0db5134b19f1f6af7f3b" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.300468 4767 scope.go:117] "RemoveContainer" containerID="197e94a1f4a5772c03d3bbaa91156fc7a8eb52691a7c6cb5d23e26f534591f9c" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.330776 4767 scope.go:117] "RemoveContainer" containerID="d33a7c4841c736ab51634b28e03dfcbf0ebfd75f39c7c627851d89b8ad7ea51f" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.377678 4767 scope.go:117] "RemoveContainer" containerID="616dca5308755e8442e9b46ff10ad31fc5d023330a20b0b7c2234de4dfd44409" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.419397 4767 scope.go:117] "RemoveContainer" containerID="19796438dd1357f69a0b3d3d0895eec9b7adfcadbd8a8ad951fae7b36d6f06b0" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.438692 4767 scope.go:117] "RemoveContainer" containerID="2ec56342422b53837226f85e0e0d7e21d21742ef716f68bff45b4b2314bca895" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.460923 4767 scope.go:117] "RemoveContainer" containerID="527b49a76fd817305e9d545e14ee5cd6a34b82a4678630c60cb764c88d049326" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.487822 4767 scope.go:117] "RemoveContainer" containerID="8d2ec0fe14f7fea0a3cc95b384f4f5f3851e067b1383bdd149326708f1b1038e" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.517291 4767 scope.go:117] "RemoveContainer" containerID="42e8d69a9b7ae679681c36ea4306c85a14aa8a30600f4e43ce0863d683fb17f2" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.547713 4767 scope.go:117] "RemoveContainer" containerID="a1a803232349f2dc08bb28c94ad9c1d02cf71632fc8498e9d707290cd72cb2f2" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.595575 4767 scope.go:117] "RemoveContainer" containerID="b833bf0b41cb962f95dee8a4b67a1b3dfd1aecdcc88114f7b2f2b08bfa908533" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.619428 4767 scope.go:117] "RemoveContainer" containerID="ab586574f3248cde5b18ec034686b4f4f72bf6ee64a175292a85f0de931b3a7b" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.653351 4767 scope.go:117] "RemoveContainer" containerID="b9c06f9935a37f32def59b3dc1b5eecbc75dc1a47de5f0aeb0da629b1b23a0bf" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.687153 4767 scope.go:117] "RemoveContainer" containerID="0fdb3d324be3afeb8f665f4f6af799fd5b2e02d9080fe4f849eaea25ec631cfd" Nov 24 22:05:41 crc kubenswrapper[4767]: I1124 22:05:41.706009 4767 scope.go:117] "RemoveContainer" containerID="acc336df61ba28e5aaea71da2df7976f80c2cfa1176bed7636a5a824455ad4af" Nov 24 22:05:52 crc kubenswrapper[4767]: I1124 22:05:52.314128 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:05:52 crc kubenswrapper[4767]: E1124 22:05:52.316569 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:06:04 crc kubenswrapper[4767]: I1124 22:06:04.313674 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:06:04 crc kubenswrapper[4767]: E1124 22:06:04.315993 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:06:04 crc kubenswrapper[4767]: I1124 22:06:04.360106 4767 generic.go:334] "Generic (PLEG): container finished" podID="b9c77001-7f38-42a1-9515-7fbe495d2577" containerID="de0dafbeb2cf5faa3188e442b9782fd31b723bdf4a6e8933c35e2e1f98690f38" exitCode=0 Nov 24 22:06:04 crc kubenswrapper[4767]: I1124 22:06:04.360171 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" event={"ID":"b9c77001-7f38-42a1-9515-7fbe495d2577","Type":"ContainerDied","Data":"de0dafbeb2cf5faa3188e442b9782fd31b723bdf4a6e8933c35e2e1f98690f38"} Nov 24 22:06:05 crc kubenswrapper[4767]: I1124 22:06:05.822853 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:06:05 crc kubenswrapper[4767]: I1124 22:06:05.989499 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-ssh-key\") pod \"b9c77001-7f38-42a1-9515-7fbe495d2577\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " Nov 24 22:06:05 crc kubenswrapper[4767]: I1124 22:06:05.989603 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-inventory\") pod \"b9c77001-7f38-42a1-9515-7fbe495d2577\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " Nov 24 22:06:05 crc kubenswrapper[4767]: I1124 22:06:05.989755 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmqn4\" (UniqueName: \"kubernetes.io/projected/b9c77001-7f38-42a1-9515-7fbe495d2577-kube-api-access-vmqn4\") pod \"b9c77001-7f38-42a1-9515-7fbe495d2577\" (UID: \"b9c77001-7f38-42a1-9515-7fbe495d2577\") " Nov 24 22:06:05 crc kubenswrapper[4767]: I1124 22:06:05.996555 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c77001-7f38-42a1-9515-7fbe495d2577-kube-api-access-vmqn4" (OuterVolumeSpecName: "kube-api-access-vmqn4") pod "b9c77001-7f38-42a1-9515-7fbe495d2577" (UID: "b9c77001-7f38-42a1-9515-7fbe495d2577"). InnerVolumeSpecName "kube-api-access-vmqn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.016849 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-inventory" (OuterVolumeSpecName: "inventory") pod "b9c77001-7f38-42a1-9515-7fbe495d2577" (UID: "b9c77001-7f38-42a1-9515-7fbe495d2577"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.045546 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b9c77001-7f38-42a1-9515-7fbe495d2577" (UID: "b9c77001-7f38-42a1-9515-7fbe495d2577"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.058072 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gs6z2"] Nov 24 22:06:06 crc kubenswrapper[4767]: E1124 22:06:06.058529 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c77001-7f38-42a1-9515-7fbe495d2577" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.058553 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c77001-7f38-42a1-9515-7fbe495d2577" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.058795 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c77001-7f38-42a1-9515-7fbe495d2577" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.060251 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.074678 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gs6z2"] Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.092340 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.092366 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9c77001-7f38-42a1-9515-7fbe495d2577-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.092398 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmqn4\" (UniqueName: \"kubernetes.io/projected/b9c77001-7f38-42a1-9515-7fbe495d2577-kube-api-access-vmqn4\") on node \"crc\" DevicePath \"\"" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.193768 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-utilities\") pod \"community-operators-gs6z2\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") " pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.194210 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-catalog-content\") pod \"community-operators-gs6z2\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") " pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.194331 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29m6r\" (UniqueName: 
\"kubernetes.io/projected/f907c413-0fec-456e-ac9a-544ca1a63559-kube-api-access-29m6r\") pod \"community-operators-gs6z2\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") " pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.296313 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-utilities\") pod \"community-operators-gs6z2\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") " pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.296460 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-catalog-content\") pod \"community-operators-gs6z2\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") " pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.296489 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29m6r\" (UniqueName: \"kubernetes.io/projected/f907c413-0fec-456e-ac9a-544ca1a63559-kube-api-access-29m6r\") pod \"community-operators-gs6z2\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") " pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.296900 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-utilities\") pod \"community-operators-gs6z2\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") " pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.297070 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-catalog-content\") pod \"community-operators-gs6z2\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") " pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.321096 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29m6r\" (UniqueName: \"kubernetes.io/projected/f907c413-0fec-456e-ac9a-544ca1a63559-kube-api-access-29m6r\") pod \"community-operators-gs6z2\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") " pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.385329 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" event={"ID":"b9c77001-7f38-42a1-9515-7fbe495d2577","Type":"ContainerDied","Data":"1bc2966c6ed0c728518e56ca3f64f5f440724457e72ce25d1ddc1a6365de7ef4"} Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.385374 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc2966c6ed0c728518e56ca3f64f5f440724457e72ce25d1ddc1a6365de7ef4" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.385418 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-67wdz" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.436908 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.479088 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45"] Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.480288 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.504383 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.504597 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.504650 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.505317 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.521300 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45"] Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.610840 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cjz45\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.611199 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cjz45\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.611445 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bbxm\" (UniqueName: \"kubernetes.io/projected/b8b98bcc-b8b9-4846-9881-398282f309f1-kube-api-access-8bbxm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cjz45\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.713044 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cjz45\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.713120 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bbxm\" (UniqueName: \"kubernetes.io/projected/b8b98bcc-b8b9-4846-9881-398282f309f1-kube-api-access-8bbxm\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-cjz45\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.713242 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cjz45\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.718622 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cjz45\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.718844 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cjz45\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.730019 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bbxm\" (UniqueName: \"kubernetes.io/projected/b8b98bcc-b8b9-4846-9881-398282f309f1-kube-api-access-8bbxm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cjz45\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.829759 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:06:06 crc kubenswrapper[4767]: I1124 22:06:06.869379 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gs6z2"] Nov 24 22:06:07 crc kubenswrapper[4767]: I1124 22:06:07.333446 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45"] Nov 24 22:06:07 crc kubenswrapper[4767]: W1124 22:06:07.335490 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8b98bcc_b8b9_4846_9881_398282f309f1.slice/crio-f96eb2879f11dc55a2afb70ea1f2f55dd49f2dbb568b5d864aeed3fafcc56f18 WatchSource:0}: Error finding container f96eb2879f11dc55a2afb70ea1f2f55dd49f2dbb568b5d864aeed3fafcc56f18: Status 404 returned error can't find the container with id f96eb2879f11dc55a2afb70ea1f2f55dd49f2dbb568b5d864aeed3fafcc56f18 Nov 24 22:06:07 crc kubenswrapper[4767]: I1124 22:06:07.394152 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" event={"ID":"b8b98bcc-b8b9-4846-9881-398282f309f1","Type":"ContainerStarted","Data":"f96eb2879f11dc55a2afb70ea1f2f55dd49f2dbb568b5d864aeed3fafcc56f18"} Nov 24 22:06:07 crc kubenswrapper[4767]: I1124 22:06:07.396474 4767 generic.go:334] "Generic (PLEG): container finished" podID="f907c413-0fec-456e-ac9a-544ca1a63559" containerID="3dbae8899b57e87973a342d7965ff105a580defb44cde304954e5b7bbfda580c" exitCode=0 Nov 24 22:06:07 crc kubenswrapper[4767]: I1124 22:06:07.396526 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs6z2" event={"ID":"f907c413-0fec-456e-ac9a-544ca1a63559","Type":"ContainerDied","Data":"3dbae8899b57e87973a342d7965ff105a580defb44cde304954e5b7bbfda580c"} Nov 24 22:06:07 crc kubenswrapper[4767]: I1124 22:06:07.396557 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs6z2" event={"ID":"f907c413-0fec-456e-ac9a-544ca1a63559","Type":"ContainerStarted","Data":"344cf8449ba7f112edc6f7e4cf03bdf701017a8d93cdb4f1f018d5964b73c705"} Nov 24 22:06:08 crc kubenswrapper[4767]: I1124 22:06:08.406702 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" event={"ID":"b8b98bcc-b8b9-4846-9881-398282f309f1","Type":"ContainerStarted","Data":"251ca5ed40b2c0c13abdf764fa3c5c6db89ff5e4a9cfdaf789f0a335f05d762e"} Nov 24 22:06:08 crc kubenswrapper[4767]: I1124 22:06:08.413472 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs6z2" event={"ID":"f907c413-0fec-456e-ac9a-544ca1a63559","Type":"ContainerStarted","Data":"3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725"} Nov 24 22:06:08 crc kubenswrapper[4767]: I1124 22:06:08.435117 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" podStartSLOduration=1.880473114 podStartE2EDuration="2.435096498s" podCreationTimestamp="2025-11-24 22:06:06 +0000 UTC" firstStartedPulling="2025-11-24 22:06:07.338497829 +0000 UTC m=+1650.255481211" lastFinishedPulling="2025-11-24 22:06:07.893121183 +0000 UTC m=+1650.810104595" observedRunningTime="2025-11-24 22:06:08.424696543 +0000 UTC m=+1651.341679925" watchObservedRunningTime="2025-11-24 22:06:08.435096498 
+0000 UTC m=+1651.352079870" Nov 24 22:06:09 crc kubenswrapper[4767]: I1124 22:06:09.422794 4767 generic.go:334] "Generic (PLEG): container finished" podID="f907c413-0fec-456e-ac9a-544ca1a63559" containerID="3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725" exitCode=0 Nov 24 22:06:09 crc kubenswrapper[4767]: I1124 22:06:09.422933 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs6z2" event={"ID":"f907c413-0fec-456e-ac9a-544ca1a63559","Type":"ContainerDied","Data":"3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725"} Nov 24 22:06:10 crc kubenswrapper[4767]: I1124 22:06:10.435506 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs6z2" event={"ID":"f907c413-0fec-456e-ac9a-544ca1a63559","Type":"ContainerStarted","Data":"9eea8a4573bb619dc04f9434997f145f49e326c29f8faad0b4ed409bbdf6fbb4"} Nov 24 22:06:10 crc kubenswrapper[4767]: I1124 22:06:10.456654 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gs6z2" podStartSLOduration=2.057737553 podStartE2EDuration="4.456635645s" podCreationTimestamp="2025-11-24 22:06:06 +0000 UTC" firstStartedPulling="2025-11-24 22:06:07.397982856 +0000 UTC m=+1650.314966228" lastFinishedPulling="2025-11-24 22:06:09.796880948 +0000 UTC m=+1652.713864320" observedRunningTime="2025-11-24 22:06:10.451024315 +0000 UTC m=+1653.368007687" watchObservedRunningTime="2025-11-24 22:06:10.456635645 +0000 UTC m=+1653.373619017" Nov 24 22:06:12 crc kubenswrapper[4767]: I1124 22:06:12.052358 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lc8sg"] Nov 24 22:06:12 crc kubenswrapper[4767]: I1124 22:06:12.062568 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lc8sg"] Nov 24 22:06:12 crc kubenswrapper[4767]: I1124 22:06:12.334381 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92996c14-829b-4668-b74f-42e672f1b9b3" path="/var/lib/kubelet/pods/92996c14-829b-4668-b74f-42e672f1b9b3/volumes" Nov 24 22:06:16 crc kubenswrapper[4767]: I1124 22:06:16.437068 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:16 crc kubenswrapper[4767]: I1124 22:06:16.437717 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:16 crc kubenswrapper[4767]: I1124 22:06:16.482760 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:16 crc kubenswrapper[4767]: I1124 22:06:16.566382 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:16 crc kubenswrapper[4767]: I1124 22:06:16.736334 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gs6z2"] Nov 24 22:06:18 crc kubenswrapper[4767]: I1124 22:06:18.321812 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:06:18 crc kubenswrapper[4767]: E1124 22:06:18.322342 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Nov 24 22:06:18 crc kubenswrapper[4767]: I1124 22:06:18.520752 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gs6z2" podUID="f907c413-0fec-456e-ac9a-544ca1a63559" containerName="registry-server" containerID="cri-o://9eea8a4573bb619dc04f9434997f145f49e326c29f8faad0b4ed409bbdf6fbb4" gracePeriod=2
Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.007518 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gs6z2"
Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.170787 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-utilities\") pod \"f907c413-0fec-456e-ac9a-544ca1a63559\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") "
Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.170944 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29m6r\" (UniqueName: \"kubernetes.io/projected/f907c413-0fec-456e-ac9a-544ca1a63559-kube-api-access-29m6r\") pod \"f907c413-0fec-456e-ac9a-544ca1a63559\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") "
Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.170980 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-catalog-content\") pod \"f907c413-0fec-456e-ac9a-544ca1a63559\" (UID: \"f907c413-0fec-456e-ac9a-544ca1a63559\") "
Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.172660 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-utilities" (OuterVolumeSpecName: "utilities") pod "f907c413-0fec-456e-ac9a-544ca1a63559" (UID: "f907c413-0fec-456e-ac9a-544ca1a63559"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.176783 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f907c413-0fec-456e-ac9a-544ca1a63559-kube-api-access-29m6r" (OuterVolumeSpecName: "kube-api-access-29m6r") pod "f907c413-0fec-456e-ac9a-544ca1a63559" (UID: "f907c413-0fec-456e-ac9a-544ca1a63559"). InnerVolumeSpecName "kube-api-access-29m6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.225972 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f907c413-0fec-456e-ac9a-544ca1a63559" (UID: "f907c413-0fec-456e-ac9a-544ca1a63559"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.273095 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.273129 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29m6r\" (UniqueName: \"kubernetes.io/projected/f907c413-0fec-456e-ac9a-544ca1a63559-kube-api-access-29m6r\") on node \"crc\" DevicePath \"\"" Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.273140 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f907c413-0fec-456e-ac9a-544ca1a63559-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.534901 4767 generic.go:334] "Generic (PLEG): container finished" podID="f907c413-0fec-456e-ac9a-544ca1a63559" containerID="9eea8a4573bb619dc04f9434997f145f49e326c29f8faad0b4ed409bbdf6fbb4" exitCode=0 Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.534953 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs6z2" event={"ID":"f907c413-0fec-456e-ac9a-544ca1a63559","Type":"ContainerDied","Data":"9eea8a4573bb619dc04f9434997f145f49e326c29f8faad0b4ed409bbdf6fbb4"} Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.534980 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gs6z2" Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.535010 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs6z2" event={"ID":"f907c413-0fec-456e-ac9a-544ca1a63559","Type":"ContainerDied","Data":"344cf8449ba7f112edc6f7e4cf03bdf701017a8d93cdb4f1f018d5964b73c705"} Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.535036 4767 scope.go:117] "RemoveContainer" containerID="9eea8a4573bb619dc04f9434997f145f49e326c29f8faad0b4ed409bbdf6fbb4" Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.578125 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gs6z2"] Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.583057 4767 scope.go:117] "RemoveContainer" containerID="3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725" Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.588550 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gs6z2"] Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.606433 4767 scope.go:117] "RemoveContainer" containerID="3dbae8899b57e87973a342d7965ff105a580defb44cde304954e5b7bbfda580c" Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.647026 4767 scope.go:117] "RemoveContainer" containerID="9eea8a4573bb619dc04f9434997f145f49e326c29f8faad0b4ed409bbdf6fbb4" Nov 24 22:06:19 crc kubenswrapper[4767]: E1124 22:06:19.647459 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eea8a4573bb619dc04f9434997f145f49e326c29f8faad0b4ed409bbdf6fbb4\": container with ID starting with 9eea8a4573bb619dc04f9434997f145f49e326c29f8faad0b4ed409bbdf6fbb4 not found: ID does not exist" containerID="9eea8a4573bb619dc04f9434997f145f49e326c29f8faad0b4ed409bbdf6fbb4" Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.647486 
Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.647508 4767 scope.go:117] "RemoveContainer" containerID="3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725"
Nov 24 22:06:19 crc kubenswrapper[4767]: E1124 22:06:19.647796 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725\": container with ID starting with 3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725 not found: ID does not exist" containerID="3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725"
Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.647828 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725"} err="failed to get container status \"3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725\": rpc error: code = NotFound desc = could not find container \"3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725\": container with ID starting with 3cac0c285aaca63179b2e0c57b9d83c45a5e4a311888ae3b9e8c6cbdfed79725 not found: ID does not exist"
Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.647849 4767 scope.go:117] "RemoveContainer" containerID="3dbae8899b57e87973a342d7965ff105a580defb44cde304954e5b7bbfda580c"
Nov 24 22:06:19 crc kubenswrapper[4767]: E1124 22:06:19.648114 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dbae8899b57e87973a342d7965ff105a580defb44cde304954e5b7bbfda580c\": container with ID starting with 3dbae8899b57e87973a342d7965ff105a580defb44cde304954e5b7bbfda580c not found: ID does not exist" containerID="3dbae8899b57e87973a342d7965ff105a580defb44cde304954e5b7bbfda580c"
Nov 24 22:06:19 crc kubenswrapper[4767]: I1124 22:06:19.648138 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dbae8899b57e87973a342d7965ff105a580defb44cde304954e5b7bbfda580c"} err="failed to get container status \"3dbae8899b57e87973a342d7965ff105a580defb44cde304954e5b7bbfda580c\": rpc error: code = NotFound desc = could not find container \"3dbae8899b57e87973a342d7965ff105a580defb44cde304954e5b7bbfda580c\": container with ID starting with 3dbae8899b57e87973a342d7965ff105a580defb44cde304954e5b7bbfda580c not found: ID does not exist"
Nov 24 22:06:20 crc kubenswrapper[4767]: I1124 22:06:20.326564 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f907c413-0fec-456e-ac9a-544ca1a63559" path="/var/lib/kubelet/pods/f907c413-0fec-456e-ac9a-544ca1a63559/volumes"
Nov 24 22:06:23 crc kubenswrapper[4767]: I1124 22:06:23.042375 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hd5nf"]
Nov 24 22:06:23 crc kubenswrapper[4767]: I1124 22:06:23.052817 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hd5nf"]
pods=["openstack/placement-db-sync-hd5nf"] Nov 24 22:06:23 crc kubenswrapper[4767]: I1124 22:06:23.063236 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2k8wb"] Nov 24 22:06:23 crc kubenswrapper[4767]: I1124 22:06:23.072282 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2k8wb"] Nov 24 22:06:24 crc kubenswrapper[4767]: I1124 22:06:24.030354 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-r9fp5"] Nov 24 22:06:24 crc kubenswrapper[4767]: I1124 22:06:24.040344 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-r9fp5"] Nov 24 22:06:24 crc kubenswrapper[4767]: I1124 22:06:24.338655 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54aafebf-445c-4632-81c3-1f35b84a4ef7" path="/var/lib/kubelet/pods/54aafebf-445c-4632-81c3-1f35b84a4ef7/volumes" Nov 24 22:06:24 crc kubenswrapper[4767]: I1124 22:06:24.340100 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83eba727-cd44-4013-8ce3-5672f4f7f595" path="/var/lib/kubelet/pods/83eba727-cd44-4013-8ce3-5672f4f7f595/volumes" Nov 24 22:06:24 crc kubenswrapper[4767]: I1124 22:06:24.341487 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd6b50ba-b398-4a5f-bfc0-fd909ddf2703" path="/var/lib/kubelet/pods/fd6b50ba-b398-4a5f-bfc0-fd909ddf2703/volumes" Nov 24 22:06:31 crc kubenswrapper[4767]: I1124 22:06:31.313901 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:06:31 crc kubenswrapper[4767]: E1124 22:06:31.314835 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:06:36 crc kubenswrapper[4767]: I1124 22:06:36.044991 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tzcqj"] Nov 24 22:06:36 crc kubenswrapper[4767]: I1124 22:06:36.055870 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tzcqj"] Nov 24 22:06:36 crc kubenswrapper[4767]: I1124 22:06:36.335175 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="128eda36-f009-47c2-8939-73ec23da0d4c" path="/var/lib/kubelet/pods/128eda36-f009-47c2-8939-73ec23da0d4c/volumes" Nov 24 22:06:42 crc kubenswrapper[4767]: I1124 22:06:42.051333 4767 scope.go:117] "RemoveContainer" containerID="164fc379f0c8290b0e60bd9c89caa60822e3fe36fedd06083adb12c19c5e3408" Nov 24 22:06:42 crc kubenswrapper[4767]: I1124 22:06:42.107897 4767 scope.go:117] "RemoveContainer" containerID="f2b8544d895d08acb115bbcb716dc8d72b95ad8f72cf4551f1c82a0cd888ac92" Nov 24 22:06:42 crc kubenswrapper[4767]: I1124 22:06:42.176958 4767 scope.go:117] "RemoveContainer" containerID="8e867542fe555a1f1945719bc235e8831bf3a6cf4cdd520b509a373473312910" Nov 24 22:06:42 crc kubenswrapper[4767]: I1124 22:06:42.195445 4767 scope.go:117] "RemoveContainer" containerID="0c03e0e66a3599ba2b540ceb043b24be074b360e6c0b32d2722f8a0986479037" Nov 24 22:06:42 crc kubenswrapper[4767]: I1124 22:06:42.222589 4767 scope.go:117] "RemoveContainer" 
containerID="88543f88cdf848cca677fbf0f060eaf50179873c8d4a13f37c36e487327e2ea8" Nov 24 22:06:42 crc kubenswrapper[4767]: I1124 22:06:42.272701 4767 scope.go:117] "RemoveContainer" containerID="e60977c789ead8b141e42c27319cf77ce4315398c54b033209d9239eb062d0d4" Nov 24 22:06:42 crc kubenswrapper[4767]: I1124 22:06:42.346187 4767 scope.go:117] "RemoveContainer" containerID="c6077eefe513932d582450265152f3c67081179ac7058f906df212ff71de5323" Nov 24 22:06:43 crc kubenswrapper[4767]: I1124 22:06:43.313941 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:06:43 crc kubenswrapper[4767]: E1124 22:06:43.314437 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:06:55 crc kubenswrapper[4767]: I1124 22:06:55.313985 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:06:55 crc kubenswrapper[4767]: E1124 22:06:55.315105 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:07:09 crc kubenswrapper[4767]: I1124 22:07:09.312981 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:07:09 crc kubenswrapper[4767]: E1124 22:07:09.313901 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:07:22 crc kubenswrapper[4767]: I1124 22:07:22.124921 4767 generic.go:334] "Generic (PLEG): container finished" podID="b8b98bcc-b8b9-4846-9881-398282f309f1" containerID="251ca5ed40b2c0c13abdf764fa3c5c6db89ff5e4a9cfdaf789f0a335f05d762e" exitCode=0 Nov 24 22:07:22 crc kubenswrapper[4767]: I1124 22:07:22.125024 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" event={"ID":"b8b98bcc-b8b9-4846-9881-398282f309f1","Type":"ContainerDied","Data":"251ca5ed40b2c0c13abdf764fa3c5c6db89ff5e4a9cfdaf789f0a335f05d762e"} Nov 24 22:07:22 crc kubenswrapper[4767]: I1124 22:07:22.314153 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:07:22 crc kubenswrapper[4767]: E1124 22:07:22.314707 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Nov 24 22:07:23 crc kubenswrapper[4767]: I1124 22:07:23.606451 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45"
Nov 24 22:07:23 crc kubenswrapper[4767]: I1124 22:07:23.664657 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-ssh-key\") pod \"b8b98bcc-b8b9-4846-9881-398282f309f1\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") "
Nov 24 22:07:23 crc kubenswrapper[4767]: I1124 22:07:23.664804 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-inventory\") pod \"b8b98bcc-b8b9-4846-9881-398282f309f1\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") "
Nov 24 22:07:23 crc kubenswrapper[4767]: I1124 22:07:23.664845 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bbxm\" (UniqueName: \"kubernetes.io/projected/b8b98bcc-b8b9-4846-9881-398282f309f1-kube-api-access-8bbxm\") pod \"b8b98bcc-b8b9-4846-9881-398282f309f1\" (UID: \"b8b98bcc-b8b9-4846-9881-398282f309f1\") "
Nov 24 22:07:23 crc kubenswrapper[4767]: I1124 22:07:23.669250 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b98bcc-b8b9-4846-9881-398282f309f1-kube-api-access-8bbxm" (OuterVolumeSpecName: "kube-api-access-8bbxm") pod "b8b98bcc-b8b9-4846-9881-398282f309f1" (UID: "b8b98bcc-b8b9-4846-9881-398282f309f1"). InnerVolumeSpecName "kube-api-access-8bbxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:07:23 crc kubenswrapper[4767]: I1124 22:07:23.690469 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b8b98bcc-b8b9-4846-9881-398282f309f1" (UID: "b8b98bcc-b8b9-4846-9881-398282f309f1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:07:23 crc kubenswrapper[4767]: I1124 22:07:23.690799 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-inventory" (OuterVolumeSpecName: "inventory") pod "b8b98bcc-b8b9-4846-9881-398282f309f1" (UID: "b8b98bcc-b8b9-4846-9881-398282f309f1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:07:23 crc kubenswrapper[4767]: I1124 22:07:23.767543 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:23 crc kubenswrapper[4767]: I1124 22:07:23.767573 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8b98bcc-b8b9-4846-9881-398282f309f1-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:23 crc kubenswrapper[4767]: I1124 22:07:23.767583 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bbxm\" (UniqueName: \"kubernetes.io/projected/b8b98bcc-b8b9-4846-9881-398282f309f1-kube-api-access-8bbxm\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.154408 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" event={"ID":"b8b98bcc-b8b9-4846-9881-398282f309f1","Type":"ContainerDied","Data":"f96eb2879f11dc55a2afb70ea1f2f55dd49f2dbb568b5d864aeed3fafcc56f18"} Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.154919 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f96eb2879f11dc55a2afb70ea1f2f55dd49f2dbb568b5d864aeed3fafcc56f18" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.154582 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cjz45" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.283844 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s"] Nov 24 22:07:24 crc kubenswrapper[4767]: E1124 22:07:24.284557 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f907c413-0fec-456e-ac9a-544ca1a63559" containerName="extract-utilities" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.284591 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f907c413-0fec-456e-ac9a-544ca1a63559" containerName="extract-utilities" Nov 24 22:07:24 crc kubenswrapper[4767]: E1124 22:07:24.284613 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f907c413-0fec-456e-ac9a-544ca1a63559" containerName="registry-server" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.284626 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f907c413-0fec-456e-ac9a-544ca1a63559" containerName="registry-server" Nov 24 22:07:24 crc kubenswrapper[4767]: E1124 22:07:24.284657 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f907c413-0fec-456e-ac9a-544ca1a63559" containerName="extract-content" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.284670 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f907c413-0fec-456e-ac9a-544ca1a63559" containerName="extract-content" Nov 24 22:07:24 crc kubenswrapper[4767]: E1124 22:07:24.284694 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b98bcc-b8b9-4846-9881-398282f309f1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.284708 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b98bcc-b8b9-4846-9881-398282f309f1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.285053 4767 
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.285110 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f907c413-0fec-456e-ac9a-544ca1a63559" containerName="registry-server"
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.286264 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s"
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.288573 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.289393 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.289620 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.294345 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s"]
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.298831 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm"
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.381770 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s"
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.381907 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbhjn\" (UniqueName: \"kubernetes.io/projected/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-kube-api-access-hbhjn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s"
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.381954 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s"
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.483246 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s"
Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.483356 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbhjn\" (UniqueName: \"kubernetes.io/projected/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-kube-api-access-hbhjn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s"
\"kubernetes.io/projected/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-kube-api-access-hbhjn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.483424 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.489629 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.489987 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.506479 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbhjn\" (UniqueName: \"kubernetes.io/projected/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-kube-api-access-hbhjn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" Nov 24 22:07:24 crc kubenswrapper[4767]: I1124 22:07:24.629844 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" Nov 24 22:07:25 crc kubenswrapper[4767]: I1124 22:07:25.191571 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s"] Nov 24 22:07:26 crc kubenswrapper[4767]: I1124 22:07:26.178113 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" event={"ID":"ce46524c-1a5f-4fb5-afc9-f3c46fa33135","Type":"ContainerStarted","Data":"b847faf15ca2f7d81001281fabc28672e8b7cb821be9a884de2570f3790bdacd"} Nov 24 22:07:26 crc kubenswrapper[4767]: I1124 22:07:26.178644 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" event={"ID":"ce46524c-1a5f-4fb5-afc9-f3c46fa33135","Type":"ContainerStarted","Data":"11c1457f78173cdab1054444a00f3e1933620ab93aeacb55f21c244d95439188"} Nov 24 22:07:26 crc kubenswrapper[4767]: I1124 22:07:26.205184 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" podStartSLOduration=1.6628776520000002 podStartE2EDuration="2.20516619s" podCreationTimestamp="2025-11-24 22:07:24 +0000 UTC" firstStartedPulling="2025-11-24 22:07:25.201916388 +0000 UTC m=+1728.118899760" lastFinishedPulling="2025-11-24 22:07:25.744204926 +0000 UTC m=+1728.661188298" observedRunningTime="2025-11-24 22:07:26.198081999 +0000 UTC m=+1729.115065381" watchObservedRunningTime="2025-11-24 22:07:26.20516619 +0000 UTC m=+1729.122149552" Nov 24 22:07:31 crc kubenswrapper[4767]: I1124 22:07:31.257917 4767 generic.go:334] "Generic (PLEG): container finished" podID="ce46524c-1a5f-4fb5-afc9-f3c46fa33135" containerID="b847faf15ca2f7d81001281fabc28672e8b7cb821be9a884de2570f3790bdacd" exitCode=0 Nov 24 22:07:31 crc kubenswrapper[4767]: I1124 22:07:31.258034 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" event={"ID":"ce46524c-1a5f-4fb5-afc9-f3c46fa33135","Type":"ContainerDied","Data":"b847faf15ca2f7d81001281fabc28672e8b7cb821be9a884de2570f3790bdacd"} Nov 24 22:07:32 crc kubenswrapper[4767]: I1124 22:07:32.708521 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" Nov 24 22:07:32 crc kubenswrapper[4767]: I1124 22:07:32.774926 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-ssh-key\") pod \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " Nov 24 22:07:32 crc kubenswrapper[4767]: I1124 22:07:32.774972 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-inventory\") pod \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " Nov 24 22:07:32 crc kubenswrapper[4767]: I1124 22:07:32.775243 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbhjn\" (UniqueName: \"kubernetes.io/projected/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-kube-api-access-hbhjn\") pod \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\" (UID: \"ce46524c-1a5f-4fb5-afc9-f3c46fa33135\") " Nov 24 22:07:32 crc kubenswrapper[4767]: I1124 22:07:32.784863 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-kube-api-access-hbhjn" (OuterVolumeSpecName: "kube-api-access-hbhjn") pod "ce46524c-1a5f-4fb5-afc9-f3c46fa33135" (UID: "ce46524c-1a5f-4fb5-afc9-f3c46fa33135"). InnerVolumeSpecName "kube-api-access-hbhjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:07:32 crc kubenswrapper[4767]: I1124 22:07:32.805371 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-inventory" (OuterVolumeSpecName: "inventory") pod "ce46524c-1a5f-4fb5-afc9-f3c46fa33135" (UID: "ce46524c-1a5f-4fb5-afc9-f3c46fa33135"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:07:32 crc kubenswrapper[4767]: I1124 22:07:32.810002 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ce46524c-1a5f-4fb5-afc9-f3c46fa33135" (UID: "ce46524c-1a5f-4fb5-afc9-f3c46fa33135"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:07:32 crc kubenswrapper[4767]: I1124 22:07:32.878305 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:32 crc kubenswrapper[4767]: I1124 22:07:32.878354 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:32 crc kubenswrapper[4767]: I1124 22:07:32.878373 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbhjn\" (UniqueName: \"kubernetes.io/projected/ce46524c-1a5f-4fb5-afc9-f3c46fa33135-kube-api-access-hbhjn\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.283603 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" event={"ID":"ce46524c-1a5f-4fb5-afc9-f3c46fa33135","Type":"ContainerDied","Data":"11c1457f78173cdab1054444a00f3e1933620ab93aeacb55f21c244d95439188"} Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.283643 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c1457f78173cdab1054444a00f3e1933620ab93aeacb55f21c244d95439188" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.283661 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.356078 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q"] Nov 24 22:07:33 crc kubenswrapper[4767]: E1124 22:07:33.356866 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce46524c-1a5f-4fb5-afc9-f3c46fa33135" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.356901 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce46524c-1a5f-4fb5-afc9-f3c46fa33135" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.357235 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce46524c-1a5f-4fb5-afc9-f3c46fa33135" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.358343 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.361050 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.361134 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.362123 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.362766 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.369609 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q"] Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.491367 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lk86q\" (UID: \"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.491467 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lk86q\" (UID: \"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.491551 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4h8v\" (UniqueName: \"kubernetes.io/projected/ee9d91d5-b6b0-4376-b65e-b211504121e8-kube-api-access-v4h8v\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lk86q\" (UID: \"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.593043 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4h8v\" (UniqueName: \"kubernetes.io/projected/ee9d91d5-b6b0-4376-b65e-b211504121e8-kube-api-access-v4h8v\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lk86q\" (UID: \"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.593200 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lk86q\" (UID: \"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.593343 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lk86q\" (UID: 
\"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.597640 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lk86q\" (UID: \"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.599831 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lk86q\" (UID: \"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.613769 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4h8v\" (UniqueName: \"kubernetes.io/projected/ee9d91d5-b6b0-4376-b65e-b211504121e8-kube-api-access-v4h8v\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lk86q\" (UID: \"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:07:33 crc kubenswrapper[4767]: I1124 22:07:33.683058 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:07:34 crc kubenswrapper[4767]: W1124 22:07:34.291339 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee9d91d5_b6b0_4376_b65e_b211504121e8.slice/crio-9a8be44d9232eb1557595f2101dec4ddd02ad1220aeae4644832e7c5aea60143 WatchSource:0}: Error finding container 9a8be44d9232eb1557595f2101dec4ddd02ad1220aeae4644832e7c5aea60143: Status 404 returned error can't find the container with id 9a8be44d9232eb1557595f2101dec4ddd02ad1220aeae4644832e7c5aea60143 Nov 24 22:07:34 crc kubenswrapper[4767]: I1124 22:07:34.292881 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q"] Nov 24 22:07:35 crc kubenswrapper[4767]: I1124 22:07:35.304252 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" event={"ID":"ee9d91d5-b6b0-4376-b65e-b211504121e8","Type":"ContainerStarted","Data":"8e527acc12a4f4e579a96420d6cdc4be8d20db96b346914c8a30f9cd7c615341"} Nov 24 22:07:35 crc kubenswrapper[4767]: I1124 22:07:35.304340 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" event={"ID":"ee9d91d5-b6b0-4376-b65e-b211504121e8","Type":"ContainerStarted","Data":"9a8be44d9232eb1557595f2101dec4ddd02ad1220aeae4644832e7c5aea60143"} Nov 24 22:07:35 crc kubenswrapper[4767]: I1124 22:07:35.314247 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:07:35 crc kubenswrapper[4767]: E1124 22:07:35.314874 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:07:35 crc kubenswrapper[4767]: I1124 22:07:35.328682 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" podStartSLOduration=1.635255143 podStartE2EDuration="2.328654549s" podCreationTimestamp="2025-11-24 22:07:33 +0000 UTC" firstStartedPulling="2025-11-24 22:07:34.294976506 +0000 UTC m=+1737.211959878" lastFinishedPulling="2025-11-24 22:07:34.988375912 +0000 UTC m=+1737.905359284" observedRunningTime="2025-11-24 22:07:35.321097195 +0000 UTC m=+1738.238080607" watchObservedRunningTime="2025-11-24 22:07:35.328654549 +0000 UTC m=+1738.245637941" Nov 24 22:07:37 crc kubenswrapper[4767]: I1124 22:07:37.046308 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-69dfn"] Nov 24 22:07:37 crc kubenswrapper[4767]: I1124 22:07:37.058954 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-69dfn"] Nov 24 22:07:38 crc kubenswrapper[4767]: I1124 22:07:38.027026 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-586a-account-create-kr7s2"] Nov 24 22:07:38 crc kubenswrapper[4767]: I1124 22:07:38.034530 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-h6qt4"] Nov 24 22:07:38 crc kubenswrapper[4767]: I1124 22:07:38.043596 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-586a-account-create-kr7s2"] Nov 24 22:07:38 crc kubenswrapper[4767]: I1124 22:07:38.051006 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-h6qt4"] Nov 24 22:07:38 crc kubenswrapper[4767]: I1124 22:07:38.330288 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50135ea4-cbb7-47f5-ad9d-6c039017bc47" path="/var/lib/kubelet/pods/50135ea4-cbb7-47f5-ad9d-6c039017bc47/volumes" Nov 24 22:07:38 crc kubenswrapper[4767]: I1124 22:07:38.335422 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aed67a4-e908-4066-b288-5f37c332a247" path="/var/lib/kubelet/pods/5aed67a4-e908-4066-b288-5f37c332a247/volumes" Nov 24 22:07:38 crc kubenswrapper[4767]: I1124 22:07:38.337344 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dedacc6-c898-4425-908b-6e94ae7bdc7f" path="/var/lib/kubelet/pods/6dedacc6-c898-4425-908b-6e94ae7bdc7f/volumes" Nov 24 22:07:39 crc kubenswrapper[4767]: I1124 22:07:39.042966 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-978e-account-create-sfbws"] Nov 24 22:07:39 crc kubenswrapper[4767]: I1124 22:07:39.065877 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-edf6-account-create-9hkk9"] Nov 24 22:07:39 crc kubenswrapper[4767]: I1124 22:07:39.075690 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9c6sk"] Nov 24 22:07:39 crc kubenswrapper[4767]: I1124 22:07:39.083017 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9c6sk"] Nov 24 22:07:39 crc kubenswrapper[4767]: I1124 22:07:39.091595 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-edf6-account-create-9hkk9"] Nov 24 22:07:39 crc kubenswrapper[4767]: I1124 22:07:39.099577 4767 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell0-978e-account-create-sfbws"] Nov 24 22:07:40 crc kubenswrapper[4767]: I1124 22:07:40.334910 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23042e8e-dbe2-4fa2-adda-ebd1b50512ec" path="/var/lib/kubelet/pods/23042e8e-dbe2-4fa2-adda-ebd1b50512ec/volumes" Nov 24 22:07:40 crc kubenswrapper[4767]: I1124 22:07:40.336320 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b153a138-74ce-4646-ae74-aba4aaa74152" path="/var/lib/kubelet/pods/b153a138-74ce-4646-ae74-aba4aaa74152/volumes" Nov 24 22:07:40 crc kubenswrapper[4767]: I1124 22:07:40.337776 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f" path="/var/lib/kubelet/pods/f7e2dc5b-82ce-4ce5-8fb5-b4e52232140f/volumes" Nov 24 22:07:42 crc kubenswrapper[4767]: I1124 22:07:42.497855 4767 scope.go:117] "RemoveContainer" containerID="027aca7d1aaad75210a05dca37006d5ffba5db063e9124177c610128aae07904" Nov 24 22:07:42 crc kubenswrapper[4767]: I1124 22:07:42.523671 4767 scope.go:117] "RemoveContainer" containerID="c13ee123e0ce6c93c3fe2b74b07f63baa0a9ee5c34376f6cc2b9266bcb70ce6b" Nov 24 22:07:42 crc kubenswrapper[4767]: I1124 22:07:42.570655 4767 scope.go:117] "RemoveContainer" containerID="2d2c973664ce878aa4fcc964244344d9a4f82869ff14be8ad42d2477b13d0f3d" Nov 24 22:07:42 crc kubenswrapper[4767]: I1124 22:07:42.607642 4767 scope.go:117] "RemoveContainer" containerID="3d3f7e830cb13ff4e4ed92e28364f123ba44a047edc9e3c106582193c50a97dd" Nov 24 22:07:42 crc kubenswrapper[4767]: I1124 22:07:42.662041 4767 scope.go:117] "RemoveContainer" containerID="032b605b6005bc68182099ff68a621cf37c89558022e15ec22a5792693c08d72" Nov 24 22:07:42 crc kubenswrapper[4767]: I1124 22:07:42.693820 4767 scope.go:117] "RemoveContainer" containerID="f7bd322832692386e3a702e96fc9bd66d1c4ffb6fe48047207f8125835152664" Nov 24 22:07:48 crc kubenswrapper[4767]: I1124 22:07:48.319863 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:07:48 crc kubenswrapper[4767]: E1124 22:07:48.321709 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:08:02 crc kubenswrapper[4767]: I1124 22:08:02.035099 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qb7jt"] Nov 24 22:08:02 crc kubenswrapper[4767]: I1124 22:08:02.049714 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qb7jt"] Nov 24 22:08:02 crc kubenswrapper[4767]: I1124 22:08:02.315674 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:08:02 crc kubenswrapper[4767]: E1124 22:08:02.316010 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" 
podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:08:02 crc kubenswrapper[4767]: I1124 22:08:02.328644 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf29d2a6-46ff-45c9-8da3-12d043fd287d" path="/var/lib/kubelet/pods/bf29d2a6-46ff-45c9-8da3-12d043fd287d/volumes" Nov 24 22:08:17 crc kubenswrapper[4767]: I1124 22:08:17.313537 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:08:17 crc kubenswrapper[4767]: E1124 22:08:17.314361 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:08:19 crc kubenswrapper[4767]: I1124 22:08:19.796228 4767 generic.go:334] "Generic (PLEG): container finished" podID="ee9d91d5-b6b0-4376-b65e-b211504121e8" containerID="8e527acc12a4f4e579a96420d6cdc4be8d20db96b346914c8a30f9cd7c615341" exitCode=0 Nov 24 22:08:19 crc kubenswrapper[4767]: I1124 22:08:19.796351 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" event={"ID":"ee9d91d5-b6b0-4376-b65e-b211504121e8","Type":"ContainerDied","Data":"8e527acc12a4f4e579a96420d6cdc4be8d20db96b346914c8a30f9cd7c615341"} Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.304558 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.390983 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4h8v\" (UniqueName: \"kubernetes.io/projected/ee9d91d5-b6b0-4376-b65e-b211504121e8-kube-api-access-v4h8v\") pod \"ee9d91d5-b6b0-4376-b65e-b211504121e8\" (UID: \"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.391145 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-ssh-key\") pod \"ee9d91d5-b6b0-4376-b65e-b211504121e8\" (UID: \"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.391188 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-inventory\") pod \"ee9d91d5-b6b0-4376-b65e-b211504121e8\" (UID: \"ee9d91d5-b6b0-4376-b65e-b211504121e8\") " Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.397560 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee9d91d5-b6b0-4376-b65e-b211504121e8-kube-api-access-v4h8v" (OuterVolumeSpecName: "kube-api-access-v4h8v") pod "ee9d91d5-b6b0-4376-b65e-b211504121e8" (UID: "ee9d91d5-b6b0-4376-b65e-b211504121e8"). InnerVolumeSpecName "kube-api-access-v4h8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.426543 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ee9d91d5-b6b0-4376-b65e-b211504121e8" (UID: "ee9d91d5-b6b0-4376-b65e-b211504121e8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.428827 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-inventory" (OuterVolumeSpecName: "inventory") pod "ee9d91d5-b6b0-4376-b65e-b211504121e8" (UID: "ee9d91d5-b6b0-4376-b65e-b211504121e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.494002 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4h8v\" (UniqueName: \"kubernetes.io/projected/ee9d91d5-b6b0-4376-b65e-b211504121e8-kube-api-access-v4h8v\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.494041 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.494054 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee9d91d5-b6b0-4376-b65e-b211504121e8-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.818885 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" event={"ID":"ee9d91d5-b6b0-4376-b65e-b211504121e8","Type":"ContainerDied","Data":"9a8be44d9232eb1557595f2101dec4ddd02ad1220aeae4644832e7c5aea60143"} Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.818931 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a8be44d9232eb1557595f2101dec4ddd02ad1220aeae4644832e7c5aea60143" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.819023 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lk86q" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.952533 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d"] Nov 24 22:08:21 crc kubenswrapper[4767]: E1124 22:08:21.953526 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9d91d5-b6b0-4376-b65e-b211504121e8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.953564 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9d91d5-b6b0-4376-b65e-b211504121e8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.954038 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee9d91d5-b6b0-4376-b65e-b211504121e8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.955740 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.958555 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.958731 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.960121 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.966147 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d"] Nov 24 22:08:21 crc kubenswrapper[4767]: I1124 22:08:21.968251 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:08:22 crc kubenswrapper[4767]: I1124 22:08:22.007662 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8l87d\" (UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:08:22 crc kubenswrapper[4767]: I1124 22:08:22.007965 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8l87d\" (UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:08:22 crc kubenswrapper[4767]: I1124 22:08:22.008113 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwgrm\" (UniqueName: \"kubernetes.io/projected/5add018b-72c6-4331-84df-96eac612f7fe-kube-api-access-jwgrm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8l87d\" (UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:08:22 crc kubenswrapper[4767]: I1124 22:08:22.109240 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwgrm\" (UniqueName: \"kubernetes.io/projected/5add018b-72c6-4331-84df-96eac612f7fe-kube-api-access-jwgrm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8l87d\" (UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:08:22 crc kubenswrapper[4767]: I1124 22:08:22.109380 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8l87d\" (UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:08:22 crc kubenswrapper[4767]: I1124 22:08:22.109560 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8l87d\" 
(UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:08:22 crc kubenswrapper[4767]: I1124 22:08:22.115220 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8l87d\" (UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:08:22 crc kubenswrapper[4767]: I1124 22:08:22.116166 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8l87d\" (UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:08:22 crc kubenswrapper[4767]: I1124 22:08:22.137712 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwgrm\" (UniqueName: \"kubernetes.io/projected/5add018b-72c6-4331-84df-96eac612f7fe-kube-api-access-jwgrm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8l87d\" (UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:08:22 crc kubenswrapper[4767]: I1124 22:08:22.282187 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:08:22 crc kubenswrapper[4767]: I1124 22:08:22.856415 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d"] Nov 24 22:08:23 crc kubenswrapper[4767]: I1124 22:08:23.841649 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" event={"ID":"5add018b-72c6-4331-84df-96eac612f7fe","Type":"ContainerStarted","Data":"2a53ca218326856852e11edd3d925fa3f9ed6b2d139adc8efecb5701827ca9a6"} Nov 24 22:08:23 crc kubenswrapper[4767]: I1124 22:08:23.842344 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" event={"ID":"5add018b-72c6-4331-84df-96eac612f7fe","Type":"ContainerStarted","Data":"9984d119e6f37d9db049510129af985dfb97555ce73182ec068bf77e0150dc47"} Nov 24 22:08:23 crc kubenswrapper[4767]: I1124 22:08:23.873703 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" podStartSLOduration=2.438485584 podStartE2EDuration="2.873671611s" podCreationTimestamp="2025-11-24 22:08:21 +0000 UTC" firstStartedPulling="2025-11-24 22:08:22.857845976 +0000 UTC m=+1785.774829388" lastFinishedPulling="2025-11-24 22:08:23.293032033 +0000 UTC m=+1786.210015415" observedRunningTime="2025-11-24 22:08:23.856244117 +0000 UTC m=+1786.773227489" watchObservedRunningTime="2025-11-24 22:08:23.873671611 +0000 UTC m=+1786.790655013" Nov 24 22:08:26 crc kubenswrapper[4767]: I1124 22:08:26.050928 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7pqm2"] Nov 24 22:08:26 crc kubenswrapper[4767]: I1124 22:08:26.064986 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7pqm2"] Nov 24 22:08:26 crc kubenswrapper[4767]: I1124 22:08:26.076319 4767 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5m85d"] Nov 24 22:08:26 crc kubenswrapper[4767]: I1124 22:08:26.086551 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5m85d"] Nov 24 22:08:26 crc kubenswrapper[4767]: I1124 22:08:26.332005 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04caedcb-53f5-42d5-9161-850f38541c06" path="/var/lib/kubelet/pods/04caedcb-53f5-42d5-9161-850f38541c06/volumes" Nov 24 22:08:26 crc kubenswrapper[4767]: I1124 22:08:26.332750 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0" path="/var/lib/kubelet/pods/cbd5b2c4-c8d4-450e-abe1-e6caed7a53f0/volumes" Nov 24 22:08:28 crc kubenswrapper[4767]: I1124 22:08:28.319284 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:08:28 crc kubenswrapper[4767]: E1124 22:08:28.319969 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:08:42 crc kubenswrapper[4767]: I1124 22:08:42.885765 4767 scope.go:117] "RemoveContainer" containerID="e01c9f961ec22e08c4c0d7fbc846695049ed620091c6fd003e6faca82305f6fe" Nov 24 22:08:42 crc kubenswrapper[4767]: I1124 22:08:42.936262 4767 scope.go:117] "RemoveContainer" containerID="293c137cc66435b7cac810cc7a19f066e20365fd15842efb5c17f85ea0fdd8cd" Nov 24 22:08:43 crc kubenswrapper[4767]: I1124 22:08:43.001384 4767 scope.go:117] "RemoveContainer" containerID="4d681b6ee97b4e2ec2c7c2a6f9c1d4f4b136be0ada0a441a05165ba674b226c5" Nov 24 22:08:43 crc kubenswrapper[4767]: I1124 22:08:43.312822 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:08:44 crc kubenswrapper[4767]: I1124 22:08:44.041187 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"5212ee13f9ec884476c9d08510699ab10c1815cd84c7d59fe73ece4597feed64"} Nov 24 22:09:11 crc kubenswrapper[4767]: I1124 22:09:11.051725 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-7rbl9"] Nov 24 22:09:11 crc kubenswrapper[4767]: I1124 22:09:11.059626 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-7rbl9"] Nov 24 22:09:12 crc kubenswrapper[4767]: I1124 22:09:12.335053 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4490c175-4526-4747-a9f3-72d5a757cda9" path="/var/lib/kubelet/pods/4490c175-4526-4747-a9f3-72d5a757cda9/volumes" Nov 24 22:09:22 crc kubenswrapper[4767]: I1124 22:09:22.437680 4767 generic.go:334] "Generic (PLEG): container finished" podID="5add018b-72c6-4331-84df-96eac612f7fe" containerID="2a53ca218326856852e11edd3d925fa3f9ed6b2d139adc8efecb5701827ca9a6" exitCode=0 Nov 24 22:09:22 crc kubenswrapper[4767]: I1124 22:09:22.437771 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" 
event={"ID":"5add018b-72c6-4331-84df-96eac612f7fe","Type":"ContainerDied","Data":"2a53ca218326856852e11edd3d925fa3f9ed6b2d139adc8efecb5701827ca9a6"} Nov 24 22:09:23 crc kubenswrapper[4767]: I1124 22:09:23.901345 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:09:23 crc kubenswrapper[4767]: I1124 22:09:23.992141 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-ssh-key\") pod \"5add018b-72c6-4331-84df-96eac612f7fe\" (UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " Nov 24 22:09:23 crc kubenswrapper[4767]: I1124 22:09:23.992444 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwgrm\" (UniqueName: \"kubernetes.io/projected/5add018b-72c6-4331-84df-96eac612f7fe-kube-api-access-jwgrm\") pod \"5add018b-72c6-4331-84df-96eac612f7fe\" (UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " Nov 24 22:09:23 crc kubenswrapper[4767]: I1124 22:09:23.992603 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-inventory\") pod \"5add018b-72c6-4331-84df-96eac612f7fe\" (UID: \"5add018b-72c6-4331-84df-96eac612f7fe\") " Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.005461 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5add018b-72c6-4331-84df-96eac612f7fe-kube-api-access-jwgrm" (OuterVolumeSpecName: "kube-api-access-jwgrm") pod "5add018b-72c6-4331-84df-96eac612f7fe" (UID: "5add018b-72c6-4331-84df-96eac612f7fe"). InnerVolumeSpecName "kube-api-access-jwgrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.043481 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5add018b-72c6-4331-84df-96eac612f7fe" (UID: "5add018b-72c6-4331-84df-96eac612f7fe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.045835 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-inventory" (OuterVolumeSpecName: "inventory") pod "5add018b-72c6-4331-84df-96eac612f7fe" (UID: "5add018b-72c6-4331-84df-96eac612f7fe"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.095500 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.095544 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5add018b-72c6-4331-84df-96eac612f7fe-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.095558 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwgrm\" (UniqueName: \"kubernetes.io/projected/5add018b-72c6-4331-84df-96eac612f7fe-kube-api-access-jwgrm\") on node \"crc\" DevicePath \"\"" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.464934 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" event={"ID":"5add018b-72c6-4331-84df-96eac612f7fe","Type":"ContainerDied","Data":"9984d119e6f37d9db049510129af985dfb97555ce73182ec068bf77e0150dc47"} Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.464974 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9984d119e6f37d9db049510129af985dfb97555ce73182ec068bf77e0150dc47" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.465091 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8l87d" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.582377 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jf5rl"] Nov 24 22:09:24 crc kubenswrapper[4767]: E1124 22:09:24.583071 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5add018b-72c6-4331-84df-96eac612f7fe" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.583093 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5add018b-72c6-4331-84df-96eac612f7fe" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.583422 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5add018b-72c6-4331-84df-96eac612f7fe" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.584323 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.586519 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.587255 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.587355 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.587632 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.603467 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jf5rl"] Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.708788 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jf5rl\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.708858 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjdlz\" (UniqueName: \"kubernetes.io/projected/70cec17a-2bbb-4bf2-9236-5848efc6689c-kube-api-access-qjdlz\") pod \"ssh-known-hosts-edpm-deployment-jf5rl\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.709020 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jf5rl\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.811489 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jf5rl\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.811534 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjdlz\" (UniqueName: \"kubernetes.io/projected/70cec17a-2bbb-4bf2-9236-5848efc6689c-kube-api-access-qjdlz\") pod \"ssh-known-hosts-edpm-deployment-jf5rl\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.811595 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jf5rl\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:24 crc 
kubenswrapper[4767]: I1124 22:09:24.818122 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jf5rl\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.818796 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jf5rl\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.835740 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjdlz\" (UniqueName: \"kubernetes.io/projected/70cec17a-2bbb-4bf2-9236-5848efc6689c-kube-api-access-qjdlz\") pod \"ssh-known-hosts-edpm-deployment-jf5rl\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:24 crc kubenswrapper[4767]: I1124 22:09:24.908826 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:25 crc kubenswrapper[4767]: I1124 22:09:25.477352 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jf5rl"] Nov 24 22:09:25 crc kubenswrapper[4767]: W1124 22:09:25.479612 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70cec17a_2bbb_4bf2_9236_5848efc6689c.slice/crio-4c136246acba474953e7dd2f54bd4a70718bd2bc8c1b8b241fd8ac3e66a4bf75 WatchSource:0}: Error finding container 4c136246acba474953e7dd2f54bd4a70718bd2bc8c1b8b241fd8ac3e66a4bf75: Status 404 returned error can't find the container with id 4c136246acba474953e7dd2f54bd4a70718bd2bc8c1b8b241fd8ac3e66a4bf75 Nov 24 22:09:25 crc kubenswrapper[4767]: I1124 22:09:25.488238 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:09:26 crc kubenswrapper[4767]: I1124 22:09:26.496188 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" event={"ID":"70cec17a-2bbb-4bf2-9236-5848efc6689c","Type":"ContainerStarted","Data":"4c136246acba474953e7dd2f54bd4a70718bd2bc8c1b8b241fd8ac3e66a4bf75"} Nov 24 22:09:27 crc kubenswrapper[4767]: I1124 22:09:27.506957 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" event={"ID":"70cec17a-2bbb-4bf2-9236-5848efc6689c","Type":"ContainerStarted","Data":"5622a747bfd82af9f39013afd301bfc7a51c3b232a603f053f315ef9f13f042d"} Nov 24 22:09:27 crc kubenswrapper[4767]: I1124 22:09:27.532661 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" podStartSLOduration=2.417775592 podStartE2EDuration="3.532621834s" podCreationTimestamp="2025-11-24 22:09:24 +0000 UTC" firstStartedPulling="2025-11-24 22:09:25.48792772 +0000 UTC m=+1848.404911102" lastFinishedPulling="2025-11-24 22:09:26.602773932 +0000 UTC m=+1849.519757344" observedRunningTime="2025-11-24 22:09:27.530661118 +0000 UTC m=+1850.447644500" watchObservedRunningTime="2025-11-24 22:09:27.532621834 +0000 UTC 
m=+1850.449605226" Nov 24 22:09:35 crc kubenswrapper[4767]: I1124 22:09:35.587760 4767 generic.go:334] "Generic (PLEG): container finished" podID="70cec17a-2bbb-4bf2-9236-5848efc6689c" containerID="5622a747bfd82af9f39013afd301bfc7a51c3b232a603f053f315ef9f13f042d" exitCode=0 Nov 24 22:09:35 crc kubenswrapper[4767]: I1124 22:09:35.587834 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" event={"ID":"70cec17a-2bbb-4bf2-9236-5848efc6689c","Type":"ContainerDied","Data":"5622a747bfd82af9f39013afd301bfc7a51c3b232a603f053f315ef9f13f042d"} Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.040106 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.183081 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-inventory-0\") pod \"70cec17a-2bbb-4bf2-9236-5848efc6689c\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.183336 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjdlz\" (UniqueName: \"kubernetes.io/projected/70cec17a-2bbb-4bf2-9236-5848efc6689c-kube-api-access-qjdlz\") pod \"70cec17a-2bbb-4bf2-9236-5848efc6689c\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.183397 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-ssh-key-openstack-edpm-ipam\") pod \"70cec17a-2bbb-4bf2-9236-5848efc6689c\" (UID: \"70cec17a-2bbb-4bf2-9236-5848efc6689c\") " Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.191091 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70cec17a-2bbb-4bf2-9236-5848efc6689c-kube-api-access-qjdlz" (OuterVolumeSpecName: "kube-api-access-qjdlz") pod "70cec17a-2bbb-4bf2-9236-5848efc6689c" (UID: "70cec17a-2bbb-4bf2-9236-5848efc6689c"). InnerVolumeSpecName "kube-api-access-qjdlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.213773 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "70cec17a-2bbb-4bf2-9236-5848efc6689c" (UID: "70cec17a-2bbb-4bf2-9236-5848efc6689c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.218831 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "70cec17a-2bbb-4bf2-9236-5848efc6689c" (UID: "70cec17a-2bbb-4bf2-9236-5848efc6689c"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.287467 4767 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.287533 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjdlz\" (UniqueName: \"kubernetes.io/projected/70cec17a-2bbb-4bf2-9236-5848efc6689c-kube-api-access-qjdlz\") on node \"crc\" DevicePath \"\"" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.287559 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70cec17a-2bbb-4bf2-9236-5848efc6689c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.611156 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" event={"ID":"70cec17a-2bbb-4bf2-9236-5848efc6689c","Type":"ContainerDied","Data":"4c136246acba474953e7dd2f54bd4a70718bd2bc8c1b8b241fd8ac3e66a4bf75"} Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.611617 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c136246acba474953e7dd2f54bd4a70718bd2bc8c1b8b241fd8ac3e66a4bf75" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.611225 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jf5rl" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.682018 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"] Nov 24 22:09:37 crc kubenswrapper[4767]: E1124 22:09:37.682548 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70cec17a-2bbb-4bf2-9236-5848efc6689c" containerName="ssh-known-hosts-edpm-deployment" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.682574 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="70cec17a-2bbb-4bf2-9236-5848efc6689c" containerName="ssh-known-hosts-edpm-deployment" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.682815 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="70cec17a-2bbb-4bf2-9236-5848efc6689c" containerName="ssh-known-hosts-edpm-deployment" Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.683661 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.685156 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.685224 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.685768 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.685806 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.690830 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"]
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.694519 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-44g44\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.695124 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-44g44\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.797848 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpq62\" (UniqueName: \"kubernetes.io/projected/0757ad1e-fda9-4955-8b22-4de26be15b37-kube-api-access-fpq62\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-44g44\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.798116 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-44g44\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.798323 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-44g44\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.802556 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-44g44\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.802602 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-44g44\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.900877 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpq62\" (UniqueName: \"kubernetes.io/projected/0757ad1e-fda9-4955-8b22-4de26be15b37-kube-api-access-fpq62\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-44g44\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:37 crc kubenswrapper[4767]: I1124 22:09:37.917871 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpq62\" (UniqueName: \"kubernetes.io/projected/0757ad1e-fda9-4955-8b22-4de26be15b37-kube-api-access-fpq62\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-44g44\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:38 crc kubenswrapper[4767]: I1124 22:09:38.013027 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:38 crc kubenswrapper[4767]: I1124 22:09:38.666503 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"]
Nov 24 22:09:38 crc kubenswrapper[4767]: W1124 22:09:38.676463 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0757ad1e_fda9_4955_8b22_4de26be15b37.slice/crio-375068a04c1a98e4811db7bb00065dbf17862f901bda509543b4bf608fef26e6 WatchSource:0}: Error finding container 375068a04c1a98e4811db7bb00065dbf17862f901bda509543b4bf608fef26e6: Status 404 returned error can't find the container with id 375068a04c1a98e4811db7bb00065dbf17862f901bda509543b4bf608fef26e6
Nov 24 22:09:39 crc kubenswrapper[4767]: I1124 22:09:39.131585 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 22:09:39 crc kubenswrapper[4767]: I1124 22:09:39.632742 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44" event={"ID":"0757ad1e-fda9-4955-8b22-4de26be15b37","Type":"ContainerStarted","Data":"0a3813e4744300139ff215e557c8ff645f2d58b1db7fcff3bf1877f87d688c07"}
Nov 24 22:09:39 crc kubenswrapper[4767]: I1124 22:09:39.633147 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44" event={"ID":"0757ad1e-fda9-4955-8b22-4de26be15b37","Type":"ContainerStarted","Data":"375068a04c1a98e4811db7bb00065dbf17862f901bda509543b4bf608fef26e6"}
Nov 24 22:09:39 crc kubenswrapper[4767]: I1124 22:09:39.648622 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44" podStartSLOduration=2.2099793 podStartE2EDuration="2.64860547s" podCreationTimestamp="2025-11-24 22:09:37 +0000 UTC" firstStartedPulling="2025-11-24 22:09:38.68886972 +0000 UTC m=+1861.605853092" lastFinishedPulling="2025-11-24 22:09:39.12749589 +0000 UTC m=+1862.044479262" observedRunningTime="2025-11-24 22:09:39.64717227 +0000 UTC m=+1862.564155642" watchObservedRunningTime="2025-11-24 22:09:39.64860547 +0000 UTC m=+1862.565588842"
Nov 24 22:09:43 crc kubenswrapper[4767]: I1124 22:09:43.082601 4767 scope.go:117] "RemoveContainer" containerID="4ce87428b6b914247bdb63237497193e1ee33b90a0c29370a2f8e98dd8342a21"
Nov 24 22:09:48 crc kubenswrapper[4767]: I1124 22:09:48.748793 4767 generic.go:334] "Generic (PLEG): container finished" podID="0757ad1e-fda9-4955-8b22-4de26be15b37" containerID="0a3813e4744300139ff215e557c8ff645f2d58b1db7fcff3bf1877f87d688c07" exitCode=0
Nov 24 22:09:48 crc kubenswrapper[4767]: I1124 22:09:48.748883 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44" event={"ID":"0757ad1e-fda9-4955-8b22-4de26be15b37","Type":"ContainerDied","Data":"0a3813e4744300139ff215e557c8ff645f2d58b1db7fcff3bf1877f87d688c07"}
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.199525 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.359119 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-inventory\") pod \"0757ad1e-fda9-4955-8b22-4de26be15b37\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") "
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.359177 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-ssh-key\") pod \"0757ad1e-fda9-4955-8b22-4de26be15b37\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") "
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.359377 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpq62\" (UniqueName: \"kubernetes.io/projected/0757ad1e-fda9-4955-8b22-4de26be15b37-kube-api-access-fpq62\") pod \"0757ad1e-fda9-4955-8b22-4de26be15b37\" (UID: \"0757ad1e-fda9-4955-8b22-4de26be15b37\") "
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.390470 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0757ad1e-fda9-4955-8b22-4de26be15b37-kube-api-access-fpq62" (OuterVolumeSpecName: "kube-api-access-fpq62") pod "0757ad1e-fda9-4955-8b22-4de26be15b37" (UID: "0757ad1e-fda9-4955-8b22-4de26be15b37"). InnerVolumeSpecName "kube-api-access-fpq62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.409702 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-inventory" (OuterVolumeSpecName: "inventory") pod "0757ad1e-fda9-4955-8b22-4de26be15b37" (UID: "0757ad1e-fda9-4955-8b22-4de26be15b37"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.461561 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpq62\" (UniqueName: \"kubernetes.io/projected/0757ad1e-fda9-4955-8b22-4de26be15b37-kube-api-access-fpq62\") on node \"crc\" DevicePath \"\""
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.461596 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.495471 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0757ad1e-fda9-4955-8b22-4de26be15b37" (UID: "0757ad1e-fda9-4955-8b22-4de26be15b37"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.563702 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0757ad1e-fda9-4955-8b22-4de26be15b37-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.767604 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44" event={"ID":"0757ad1e-fda9-4955-8b22-4de26be15b37","Type":"ContainerDied","Data":"375068a04c1a98e4811db7bb00065dbf17862f901bda509543b4bf608fef26e6"}
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.767880 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="375068a04c1a98e4811db7bb00065dbf17862f901bda509543b4bf608fef26e6"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.767644 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-44g44"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.840413 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"]
Nov 24 22:09:50 crc kubenswrapper[4767]: E1124 22:09:50.840882 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0757ad1e-fda9-4955-8b22-4de26be15b37" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.840907 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0757ad1e-fda9-4955-8b22-4de26be15b37" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.841180 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="0757ad1e-fda9-4955-8b22-4de26be15b37" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.841898 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.844620 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.844819 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.845087 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.845400 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.849854 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"]
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.972107 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjsv5\" (UniqueName: \"kubernetes.io/projected/9ee9a8bf-0bd8-49fc-8421-1805014adfac-kube-api-access-xjsv5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.972559 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:09:50 crc kubenswrapper[4767]: I1124 22:09:50.972842 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:09:51 crc kubenswrapper[4767]: I1124 22:09:51.075752 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjsv5\" (UniqueName: \"kubernetes.io/projected/9ee9a8bf-0bd8-49fc-8421-1805014adfac-kube-api-access-xjsv5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:09:51 crc kubenswrapper[4767]: I1124 22:09:51.075878 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:09:51 crc kubenswrapper[4767]: I1124 22:09:51.075976 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:09:51 crc kubenswrapper[4767]: I1124 22:09:51.082187 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:09:51 crc kubenswrapper[4767]: I1124 22:09:51.083016 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:09:51 crc kubenswrapper[4767]: I1124 22:09:51.094963 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjsv5\" (UniqueName: \"kubernetes.io/projected/9ee9a8bf-0bd8-49fc-8421-1805014adfac-kube-api-access-xjsv5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:09:51 crc kubenswrapper[4767]: I1124 22:09:51.162130 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:09:51 crc kubenswrapper[4767]: I1124 22:09:51.724082 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"]
Nov 24 22:09:51 crc kubenswrapper[4767]: I1124 22:09:51.776665 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk" event={"ID":"9ee9a8bf-0bd8-49fc-8421-1805014adfac","Type":"ContainerStarted","Data":"cc44b008e3239efa8c327ab8b272c14bd280874f0cacdf6da3e1a1f6057ede58"}
Nov 24 22:09:53 crc kubenswrapper[4767]: I1124 22:09:53.795289 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk" event={"ID":"9ee9a8bf-0bd8-49fc-8421-1805014adfac","Type":"ContainerStarted","Data":"7d9c0dda30b8dfeb00ab46637965e454cd5cd222ad82010a4cefa41280304187"}
Nov 24 22:09:53 crc kubenswrapper[4767]: I1124 22:09:53.812466 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk" podStartSLOduration=2.8860404859999997 podStartE2EDuration="3.81244965s" podCreationTimestamp="2025-11-24 22:09:50 +0000 UTC" firstStartedPulling="2025-11-24 22:09:51.735961303 +0000 UTC m=+1874.652944685" lastFinishedPulling="2025-11-24 22:09:52.662370477 +0000 UTC m=+1875.579353849" observedRunningTime="2025-11-24 22:09:53.807788868 +0000 UTC m=+1876.724772260" watchObservedRunningTime="2025-11-24 22:09:53.81244965 +0000 UTC m=+1876.729433022"
Nov 24 22:10:02 crc kubenswrapper[4767]: I1124 22:10:02.886317 4767 generic.go:334] "Generic (PLEG): container finished" podID="9ee9a8bf-0bd8-49fc-8421-1805014adfac" containerID="7d9c0dda30b8dfeb00ab46637965e454cd5cd222ad82010a4cefa41280304187" exitCode=0
Nov 24 22:10:02 crc kubenswrapper[4767]: I1124 22:10:02.886581 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk" event={"ID":"9ee9a8bf-0bd8-49fc-8421-1805014adfac","Type":"ContainerDied","Data":"7d9c0dda30b8dfeb00ab46637965e454cd5cd222ad82010a4cefa41280304187"}
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.286521 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.353261 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-ssh-key\") pod \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") "
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.353422 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjsv5\" (UniqueName: \"kubernetes.io/projected/9ee9a8bf-0bd8-49fc-8421-1805014adfac-kube-api-access-xjsv5\") pod \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") "
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.353576 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-inventory\") pod \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\" (UID: \"9ee9a8bf-0bd8-49fc-8421-1805014adfac\") "
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.358451 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ee9a8bf-0bd8-49fc-8421-1805014adfac-kube-api-access-xjsv5" (OuterVolumeSpecName: "kube-api-access-xjsv5") pod "9ee9a8bf-0bd8-49fc-8421-1805014adfac" (UID: "9ee9a8bf-0bd8-49fc-8421-1805014adfac"). InnerVolumeSpecName "kube-api-access-xjsv5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.379197 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-inventory" (OuterVolumeSpecName: "inventory") pod "9ee9a8bf-0bd8-49fc-8421-1805014adfac" (UID: "9ee9a8bf-0bd8-49fc-8421-1805014adfac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.381105 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9ee9a8bf-0bd8-49fc-8421-1805014adfac" (UID: "9ee9a8bf-0bd8-49fc-8421-1805014adfac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.455474 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.455511 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ee9a8bf-0bd8-49fc-8421-1805014adfac-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.455521 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjsv5\" (UniqueName: \"kubernetes.io/projected/9ee9a8bf-0bd8-49fc-8421-1805014adfac-kube-api-access-xjsv5\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.911925 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk" event={"ID":"9ee9a8bf-0bd8-49fc-8421-1805014adfac","Type":"ContainerDied","Data":"cc44b008e3239efa8c327ab8b272c14bd280874f0cacdf6da3e1a1f6057ede58"}
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.912203 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc44b008e3239efa8c327ab8b272c14bd280874f0cacdf6da3e1a1f6057ede58"
Nov 24 22:10:04 crc kubenswrapper[4767]: I1124 22:10:04.912211 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.058795 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"]
Nov 24 22:10:05 crc kubenswrapper[4767]: E1124 22:10:05.059163 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee9a8bf-0bd8-49fc-8421-1805014adfac" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.059179 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee9a8bf-0bd8-49fc-8421-1805014adfac" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.059388 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ee9a8bf-0bd8-49fc-8421-1805014adfac" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.060016 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.062649 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.062674 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.063006 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.063023 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.063633 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.063729 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.064764 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.070032 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.113576 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"]
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170330 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170386 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170420 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170442 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170484 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170502 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170518 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170537 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170575 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170615 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4mgk\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-kube-api-access-q4mgk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170645 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170661 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170678 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.170715 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.272905 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.273528 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.273621 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.273663 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.273690 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.273720 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.273779 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.273847 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4mgk\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-kube-api-access-q4mgk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.273897 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.273922 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.273948 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.273987 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.274048 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.274073 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.278436 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.278514 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.278913 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.279950 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.280133 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.280327 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.280627 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.280637 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.281038 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.281632 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.285652 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.285675 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.287217 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.296706 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4mgk\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-kube-api-access-q4mgk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.378556 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:05 crc kubenswrapper[4767]: I1124 22:10:05.919155 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"]
Nov 24 22:10:06 crc kubenswrapper[4767]: I1124 22:10:06.929092 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq" event={"ID":"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f","Type":"ContainerStarted","Data":"54759a5c13332598a146f626ac99396084ae79624d0c90f5183925eca91c4d95"}
Nov 24 22:10:06 crc kubenswrapper[4767]: I1124 22:10:06.929428 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq" event={"ID":"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f","Type":"ContainerStarted","Data":"21cb10ef56eee56797cada7f2a301aa37ae0e9d6342682680e2cc550f13066a0"}
Nov 24 22:10:06 crc kubenswrapper[4767]: I1124 22:10:06.947756 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq" podStartSLOduration=1.245123754 podStartE2EDuration="1.94774172s" podCreationTimestamp="2025-11-24 22:10:05 +0000 UTC" firstStartedPulling="2025-11-24 22:10:05.924917649 +0000 UTC m=+1888.841901021" lastFinishedPulling="2025-11-24 22:10:06.627535605 +0000 UTC m=+1889.544518987" observedRunningTime="2025-11-24 22:10:06.945137247 +0000 UTC m=+1889.862120629" watchObservedRunningTime="2025-11-24 22:10:06.94774172 +0000 UTC m=+1889.864725092"
Nov 24 22:10:16 crc kubenswrapper[4767]: I1124 22:10:16.806744 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jjk4x" podUID="b7220fb1-add2-490e-9a22-09ca48f0de97" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 22:10:51 crc kubenswrapper[4767]: I1124 22:10:51.401932 4767 generic.go:334] "Generic (PLEG): container finished" podID="dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" containerID="54759a5c13332598a146f626ac99396084ae79624d0c90f5183925eca91c4d95" exitCode=0
Nov 24 22:10:51 crc kubenswrapper[4767]: I1124 22:10:51.401995 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq" event={"ID":"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f","Type":"ContainerDied","Data":"54759a5c13332598a146f626ac99396084ae79624d0c90f5183925eca91c4d95"}
Nov 24 22:10:52 crc kubenswrapper[4767]: I1124 22:10:52.966759 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.132358 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.132676 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-neutron-metadata-combined-ca-bundle\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.132884 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-inventory\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.132998 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-repo-setup-combined-ca-bundle\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.133170 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4mgk\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-kube-api-access-q4mgk\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.133349 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ovn-combined-ca-bundle\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.133911 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ssh-key\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.134068 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.134253 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-nova-combined-ca-bundle\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.134549 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-libvirt-combined-ca-bundle\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.134788 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.134922 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.135038 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-telemetry-combined-ca-bundle\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.135543 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-bootstrap-combined-ca-bundle\") pod \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\" (UID: \"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f\") "
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.139315 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-kube-api-access-q4mgk" (OuterVolumeSpecName: "kube-api-access-q4mgk") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "kube-api-access-q4mgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.139864 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.141216 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.143107 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.145133 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.147417 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.147482 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.147530 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.147554 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.148337 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.150390 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.167724 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.188323 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.198440 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-inventory" (OuterVolumeSpecName: "inventory") pod "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" (UID: "dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.238806 4767 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.238871 4767 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.238891 4767 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.238911 4767 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.238931 4767 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.238950 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.238967 4767 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.238985 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4mgk\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-kube-api-access-q4mgk\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.239005 4767 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.239022 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.239039 4767 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.239056 4767 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.239074 4767 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.239091 4767 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.428155 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq" event={"ID":"dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f","Type":"ContainerDied","Data":"21cb10ef56eee56797cada7f2a301aa37ae0e9d6342682680e2cc550f13066a0"}
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.428236 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21cb10ef56eee56797cada7f2a301aa37ae0e9d6342682680e2cc550f13066a0"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.428355 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.551127 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq"]
Nov 24 22:10:53 crc kubenswrapper[4767]: E1124 22:10:53.551758 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.551784 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.552046 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.553011 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.563167 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.563196 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.563516 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.563621 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.563821 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.566146 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq"]
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.650788 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.650838 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ff4d\" (UniqueName: \"kubernetes.io/projected/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-kube-api-access-8ff4d\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.651015 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.651253 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.651351 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq"
Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.753793 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\"
(UniqueName: \"kubernetes.io/configmap/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.753916 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.754151 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.754191 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ff4d\" (UniqueName: \"kubernetes.io/projected/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-kube-api-access-8ff4d\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.754336 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.755664 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.759580 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.760562 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.761184 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.773877 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ff4d\" (UniqueName: \"kubernetes.io/projected/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-kube-api-access-8ff4d\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6qclq\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:10:53 crc kubenswrapper[4767]: I1124 22:10:53.878873 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:10:54 crc kubenswrapper[4767]: I1124 22:10:54.533136 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq"] Nov 24 22:10:55 crc kubenswrapper[4767]: I1124 22:10:55.445470 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" event={"ID":"1de492fb-e45f-40d5-8115-0c5a9ae9e49a","Type":"ContainerStarted","Data":"90ced397c015b52ebaf0c04cb0c0fd27f88b44d50f44f467a6b46c1530b6391f"} Nov 24 22:10:55 crc kubenswrapper[4767]: I1124 22:10:55.446262 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" event={"ID":"1de492fb-e45f-40d5-8115-0c5a9ae9e49a","Type":"ContainerStarted","Data":"2b1b4340f88649cee058270617f217b42632b1e5442950109a5a643a8cb3b1da"} Nov 24 22:10:55 crc kubenswrapper[4767]: I1124 22:10:55.462028 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" podStartSLOduration=2.003604727 podStartE2EDuration="2.462005223s" podCreationTimestamp="2025-11-24 22:10:53 +0000 UTC" firstStartedPulling="2025-11-24 22:10:54.541540538 +0000 UTC m=+1937.458523920" lastFinishedPulling="2025-11-24 22:10:54.999941024 +0000 UTC m=+1937.916924416" observedRunningTime="2025-11-24 22:10:55.459812881 +0000 UTC m=+1938.376796263" watchObservedRunningTime="2025-11-24 22:10:55.462005223 +0000 UTC m=+1938.378988605" Nov 24 22:11:05 crc kubenswrapper[4767]: I1124 22:11:05.481401 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:11:05 crc kubenswrapper[4767]: I1124 22:11:05.482051 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:11:35 crc kubenswrapper[4767]: I1124 22:11:35.481823 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:11:35 crc kubenswrapper[4767]: I1124 22:11:35.482977 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:12:05 crc kubenswrapper[4767]: I1124 22:12:05.481706 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:12:05 crc kubenswrapper[4767]: I1124 22:12:05.482287 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:12:05 crc kubenswrapper[4767]: I1124 22:12:05.482340 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 22:12:05 crc kubenswrapper[4767]: I1124 22:12:05.483161 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5212ee13f9ec884476c9d08510699ab10c1815cd84c7d59fe73ece4597feed64"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:12:05 crc kubenswrapper[4767]: I1124 22:12:05.483236 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://5212ee13f9ec884476c9d08510699ab10c1815cd84c7d59fe73ece4597feed64" gracePeriod=600 Nov 24 22:12:06 crc kubenswrapper[4767]: I1124 22:12:06.208327 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="5212ee13f9ec884476c9d08510699ab10c1815cd84c7d59fe73ece4597feed64" exitCode=0 Nov 24 22:12:06 crc kubenswrapper[4767]: I1124 22:12:06.208419 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"5212ee13f9ec884476c9d08510699ab10c1815cd84c7d59fe73ece4597feed64"} Nov 24 22:12:06 crc kubenswrapper[4767]: I1124 22:12:06.209639 4767 scope.go:117] "RemoveContainer" containerID="f8cda6d39cf1c40ba25f60dd496de6f5cf98bd3d4990c2169ce8d8dfc7f3532c" Nov 24 22:12:06 crc kubenswrapper[4767]: I1124 22:12:06.209624 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d"} Nov 24 22:12:08 crc kubenswrapper[4767]: I1124 22:12:08.235656 4767 generic.go:334] "Generic (PLEG): container finished" podID="1de492fb-e45f-40d5-8115-0c5a9ae9e49a" containerID="90ced397c015b52ebaf0c04cb0c0fd27f88b44d50f44f467a6b46c1530b6391f" exitCode=0 Nov 24 22:12:08 crc kubenswrapper[4767]: I1124 22:12:08.236263 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" 
event={"ID":"1de492fb-e45f-40d5-8115-0c5a9ae9e49a","Type":"ContainerDied","Data":"90ced397c015b52ebaf0c04cb0c0fd27f88b44d50f44f467a6b46c1530b6391f"} Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.745392 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.845075 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-inventory\") pod \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.845526 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovn-combined-ca-bundle\") pod \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.845687 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ff4d\" (UniqueName: \"kubernetes.io/projected/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-kube-api-access-8ff4d\") pod \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.845718 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ssh-key\") pod \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.846355 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovncontroller-config-0\") pod \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\" (UID: \"1de492fb-e45f-40d5-8115-0c5a9ae9e49a\") " Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.852175 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-kube-api-access-8ff4d" (OuterVolumeSpecName: "kube-api-access-8ff4d") pod "1de492fb-e45f-40d5-8115-0c5a9ae9e49a" (UID: "1de492fb-e45f-40d5-8115-0c5a9ae9e49a"). InnerVolumeSpecName "kube-api-access-8ff4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.852190 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1de492fb-e45f-40d5-8115-0c5a9ae9e49a" (UID: "1de492fb-e45f-40d5-8115-0c5a9ae9e49a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.878183 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "1de492fb-e45f-40d5-8115-0c5a9ae9e49a" (UID: "1de492fb-e45f-40d5-8115-0c5a9ae9e49a"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.885881 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-inventory" (OuterVolumeSpecName: "inventory") pod "1de492fb-e45f-40d5-8115-0c5a9ae9e49a" (UID: "1de492fb-e45f-40d5-8115-0c5a9ae9e49a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.889072 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1de492fb-e45f-40d5-8115-0c5a9ae9e49a" (UID: "1de492fb-e45f-40d5-8115-0c5a9ae9e49a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.948915 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ff4d\" (UniqueName: \"kubernetes.io/projected/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-kube-api-access-8ff4d\") on node \"crc\" DevicePath \"\"" Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.948954 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.948967 4767 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.948978 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:12:09 crc kubenswrapper[4767]: I1124 22:12:09.948989 4767 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de492fb-e45f-40d5-8115-0c5a9ae9e49a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.267834 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" event={"ID":"1de492fb-e45f-40d5-8115-0c5a9ae9e49a","Type":"ContainerDied","Data":"2b1b4340f88649cee058270617f217b42632b1e5442950109a5a643a8cb3b1da"} Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.267886 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b1b4340f88649cee058270617f217b42632b1e5442950109a5a643a8cb3b1da" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.267963 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6qclq" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.372373 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7"] Nov 24 22:12:10 crc kubenswrapper[4767]: E1124 22:12:10.372844 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de492fb-e45f-40d5-8115-0c5a9ae9e49a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.372862 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de492fb-e45f-40d5-8115-0c5a9ae9e49a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.373133 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de492fb-e45f-40d5-8115-0c5a9ae9e49a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.373989 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.376359 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.377518 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.377781 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.378097 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.378193 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.378220 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.385907 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7"] Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.459597 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.459928 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.459959 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.460024 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.460191 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.460256 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8v6\" (UniqueName: \"kubernetes.io/projected/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-kube-api-access-lf8v6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.562265 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.562358 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.562443 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.562546 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: 
\"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.562594 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf8v6\" (UniqueName: \"kubernetes.io/projected/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-kube-api-access-lf8v6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.562807 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.568082 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.568124 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.569004 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.569574 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.570319 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.590223 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf8v6\" (UniqueName: 
\"kubernetes.io/projected/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-kube-api-access-lf8v6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:10 crc kubenswrapper[4767]: I1124 22:12:10.702659 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:12:11 crc kubenswrapper[4767]: I1124 22:12:11.245736 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7"] Nov 24 22:12:11 crc kubenswrapper[4767]: I1124 22:12:11.282449 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" event={"ID":"73ed4c4b-18b6-4d28-b0b2-f1a480963c46","Type":"ContainerStarted","Data":"6aa7d5a855d4a14aaaa5b6a7c0f218c0bf7fe983e3d2553c7988608839035114"} Nov 24 22:12:12 crc kubenswrapper[4767]: I1124 22:12:12.293313 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" event={"ID":"73ed4c4b-18b6-4d28-b0b2-f1a480963c46","Type":"ContainerStarted","Data":"f4d43ae19e472f2a0ae103a338af40a2ceae0206f0e950dcf51f7c4c3b7acbff"} Nov 24 22:12:12 crc kubenswrapper[4767]: I1124 22:12:12.331635 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" podStartSLOduration=1.770093929 podStartE2EDuration="2.331597918s" podCreationTimestamp="2025-11-24 22:12:10 +0000 UTC" firstStartedPulling="2025-11-24 22:12:11.258247421 +0000 UTC m=+2014.175230793" lastFinishedPulling="2025-11-24 22:12:11.81975142 +0000 UTC m=+2014.736734782" observedRunningTime="2025-11-24 22:12:12.318821655 +0000 UTC m=+2015.235805047" watchObservedRunningTime="2025-11-24 22:12:12.331597918 +0000 UTC m=+2015.248581330" Nov 24 22:13:03 crc kubenswrapper[4767]: I1124 22:13:03.813174 4767 generic.go:334] "Generic (PLEG): container finished" podID="73ed4c4b-18b6-4d28-b0b2-f1a480963c46" containerID="f4d43ae19e472f2a0ae103a338af40a2ceae0206f0e950dcf51f7c4c3b7acbff" exitCode=0 Nov 24 22:13:03 crc kubenswrapper[4767]: I1124 22:13:03.813243 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" event={"ID":"73ed4c4b-18b6-4d28-b0b2-f1a480963c46","Type":"ContainerDied","Data":"f4d43ae19e472f2a0ae103a338af40a2ceae0206f0e950dcf51f7c4c3b7acbff"} Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.309405 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.351205 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-metadata-combined-ca-bundle\") pod \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.351308 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-ovn-metadata-agent-neutron-config-0\") pod \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.351396 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-nova-metadata-neutron-config-0\") pod \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.351493 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-ssh-key\") pod \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.351669 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf8v6\" (UniqueName: \"kubernetes.io/projected/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-kube-api-access-lf8v6\") pod \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.351869 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-inventory\") pod \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\" (UID: \"73ed4c4b-18b6-4d28-b0b2-f1a480963c46\") " Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.361471 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "73ed4c4b-18b6-4d28-b0b2-f1a480963c46" (UID: "73ed4c4b-18b6-4d28-b0b2-f1a480963c46"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.361554 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-kube-api-access-lf8v6" (OuterVolumeSpecName: "kube-api-access-lf8v6") pod "73ed4c4b-18b6-4d28-b0b2-f1a480963c46" (UID: "73ed4c4b-18b6-4d28-b0b2-f1a480963c46"). InnerVolumeSpecName "kube-api-access-lf8v6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.390636 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "73ed4c4b-18b6-4d28-b0b2-f1a480963c46" (UID: "73ed4c4b-18b6-4d28-b0b2-f1a480963c46"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.390688 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-inventory" (OuterVolumeSpecName: "inventory") pod "73ed4c4b-18b6-4d28-b0b2-f1a480963c46" (UID: "73ed4c4b-18b6-4d28-b0b2-f1a480963c46"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.391023 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "73ed4c4b-18b6-4d28-b0b2-f1a480963c46" (UID: "73ed4c4b-18b6-4d28-b0b2-f1a480963c46"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.401595 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "73ed4c4b-18b6-4d28-b0b2-f1a480963c46" (UID: "73ed4c4b-18b6-4d28-b0b2-f1a480963c46"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.454859 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf8v6\" (UniqueName: \"kubernetes.io/projected/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-kube-api-access-lf8v6\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.454905 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.454917 4767 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.454927 4767 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.454937 4767 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.454947 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73ed4c4b-18b6-4d28-b0b2-f1a480963c46-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.839831 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" event={"ID":"73ed4c4b-18b6-4d28-b0b2-f1a480963c46","Type":"ContainerDied","Data":"6aa7d5a855d4a14aaaa5b6a7c0f218c0bf7fe983e3d2553c7988608839035114"} Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.839886 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aa7d5a855d4a14aaaa5b6a7c0f218c0bf7fe983e3d2553c7988608839035114" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.839903 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.949066 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2"] Nov 24 22:13:05 crc kubenswrapper[4767]: E1124 22:13:05.949522 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ed4c4b-18b6-4d28-b0b2-f1a480963c46" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.949549 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ed4c4b-18b6-4d28-b0b2-f1a480963c46" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.949754 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ed4c4b-18b6-4d28-b0b2-f1a480963c46" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.950474 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.955478 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.955882 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.956350 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.956509 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.957166 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:13:05 crc kubenswrapper[4767]: I1124 22:13:05.976396 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2"] Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.064463 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.064552 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.064789 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.064839 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5t92\" (UniqueName: \"kubernetes.io/projected/12cea285-00cd-40e4-b751-75563f414f33-kube-api-access-v5t92\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.064918 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.167373 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.167473 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.167571 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.167597 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5t92\" (UniqueName: \"kubernetes.io/projected/12cea285-00cd-40e4-b751-75563f414f33-kube-api-access-v5t92\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.168120 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.171487 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.173367 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.173491 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.173743 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-ssh-key\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.184538 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5t92\" (UniqueName: \"kubernetes.io/projected/12cea285-00cd-40e4-b751-75563f414f33-kube-api-access-v5t92\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.271088 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.786333 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2"] Nov 24 22:13:06 crc kubenswrapper[4767]: I1124 22:13:06.848247 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" event={"ID":"12cea285-00cd-40e4-b751-75563f414f33","Type":"ContainerStarted","Data":"ca57ad9bfc223bf1d60111d12de9e00e7b9c8b06fbc6e8e5e9a5cf9b9a4023b5"} Nov 24 22:13:07 crc kubenswrapper[4767]: I1124 22:13:07.865941 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" event={"ID":"12cea285-00cd-40e4-b751-75563f414f33","Type":"ContainerStarted","Data":"9def48b776eb1417da4f47ecc8f23a14e1f0b91fb4f937d86612de800a3c46b6"} Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.047611 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" podStartSLOduration=9.629205455 podStartE2EDuration="10.047585646s" podCreationTimestamp="2025-11-24 22:13:05 +0000 UTC" firstStartedPulling="2025-11-24 22:13:06.794498058 +0000 UTC m=+2069.711481430" lastFinishedPulling="2025-11-24 22:13:07.212878249 +0000 UTC m=+2070.129861621" observedRunningTime="2025-11-24 22:13:07.888899652 +0000 UTC m=+2070.805883024" watchObservedRunningTime="2025-11-24 22:13:15.047585646 +0000 UTC m=+2077.964569018" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.050120 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rnwn9"] Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.057206 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.064400 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rnwn9"] Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.141304 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2jt8\" (UniqueName: \"kubernetes.io/projected/18c57899-4216-446e-b594-fb85e797cbaf-kube-api-access-l2jt8\") pod \"redhat-operators-rnwn9\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.141418 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-utilities\") pod \"redhat-operators-rnwn9\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.141486 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-catalog-content\") pod \"redhat-operators-rnwn9\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.243254 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2jt8\" (UniqueName: \"kubernetes.io/projected/18c57899-4216-446e-b594-fb85e797cbaf-kube-api-access-l2jt8\") pod \"redhat-operators-rnwn9\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.243379 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-utilities\") pod \"redhat-operators-rnwn9\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.243420 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-catalog-content\") pod \"redhat-operators-rnwn9\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.243932 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-catalog-content\") pod \"redhat-operators-rnwn9\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.244458 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-utilities\") pod \"redhat-operators-rnwn9\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.265215 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l2jt8\" (UniqueName: \"kubernetes.io/projected/18c57899-4216-446e-b594-fb85e797cbaf-kube-api-access-l2jt8\") pod \"redhat-operators-rnwn9\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.393737 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:15 crc kubenswrapper[4767]: I1124 22:13:15.933323 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rnwn9"] Nov 24 22:13:16 crc kubenswrapper[4767]: I1124 22:13:16.947978 4767 generic.go:334] "Generic (PLEG): container finished" podID="18c57899-4216-446e-b594-fb85e797cbaf" containerID="c6d6302969d779948be89f7121602466474f3b6bdca3753fe532c05134a16a55" exitCode=0 Nov 24 22:13:16 crc kubenswrapper[4767]: I1124 22:13:16.948037 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnwn9" event={"ID":"18c57899-4216-446e-b594-fb85e797cbaf","Type":"ContainerDied","Data":"c6d6302969d779948be89f7121602466474f3b6bdca3753fe532c05134a16a55"} Nov 24 22:13:16 crc kubenswrapper[4767]: I1124 22:13:16.948800 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnwn9" event={"ID":"18c57899-4216-446e-b594-fb85e797cbaf","Type":"ContainerStarted","Data":"075afe21c0373a1b4a1c8ea04a62f4b3912677899c2718d1116f1d78c7c81dee"} Nov 24 22:13:17 crc kubenswrapper[4767]: I1124 22:13:17.958639 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnwn9" event={"ID":"18c57899-4216-446e-b594-fb85e797cbaf","Type":"ContainerStarted","Data":"4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d"} Nov 24 22:13:19 crc kubenswrapper[4767]: I1124 22:13:19.980041 4767 generic.go:334] "Generic (PLEG): container finished" podID="18c57899-4216-446e-b594-fb85e797cbaf" containerID="4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d" exitCode=0 Nov 24 22:13:19 crc kubenswrapper[4767]: I1124 22:13:19.980121 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnwn9" event={"ID":"18c57899-4216-446e-b594-fb85e797cbaf","Type":"ContainerDied","Data":"4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d"} Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.628642 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8rnmb"] Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.631285 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.639968 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rnmb"] Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.753137 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-catalog-content\") pod \"redhat-marketplace-8rnmb\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.753189 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hhh4\" (UniqueName: \"kubernetes.io/projected/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-kube-api-access-2hhh4\") pod \"redhat-marketplace-8rnmb\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.753254 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-utilities\") pod \"redhat-marketplace-8rnmb\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.855341 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-catalog-content\") pod \"redhat-marketplace-8rnmb\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.855632 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hhh4\" (UniqueName: \"kubernetes.io/projected/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-kube-api-access-2hhh4\") pod \"redhat-marketplace-8rnmb\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.855728 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-utilities\") pod \"redhat-marketplace-8rnmb\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.855979 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-catalog-content\") pod \"redhat-marketplace-8rnmb\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.856301 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-utilities\") pod \"redhat-marketplace-8rnmb\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.880995 4767 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2hhh4\" (UniqueName: \"kubernetes.io/projected/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-kube-api-access-2hhh4\") pod \"redhat-marketplace-8rnmb\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.955049 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:20 crc kubenswrapper[4767]: I1124 22:13:20.999631 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnwn9" event={"ID":"18c57899-4216-446e-b594-fb85e797cbaf","Type":"ContainerStarted","Data":"43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d"} Nov 24 22:13:21 crc kubenswrapper[4767]: I1124 22:13:21.038161 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rnwn9" podStartSLOduration=2.245786828 podStartE2EDuration="6.038142938s" podCreationTimestamp="2025-11-24 22:13:15 +0000 UTC" firstStartedPulling="2025-11-24 22:13:16.949831414 +0000 UTC m=+2079.866814786" lastFinishedPulling="2025-11-24 22:13:20.742187524 +0000 UTC m=+2083.659170896" observedRunningTime="2025-11-24 22:13:21.030389958 +0000 UTC m=+2083.947373330" watchObservedRunningTime="2025-11-24 22:13:21.038142938 +0000 UTC m=+2083.955126310" Nov 24 22:13:21 crc kubenswrapper[4767]: I1124 22:13:21.430097 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rnmb"] Nov 24 22:13:22 crc kubenswrapper[4767]: I1124 22:13:22.009793 4767 generic.go:334] "Generic (PLEG): container finished" podID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" containerID="b2d0183fd8e923941e36d5ce2bcc21ba6955d617c8a0ec3e113ae84802e62668" exitCode=0 Nov 24 22:13:22 crc kubenswrapper[4767]: I1124 22:13:22.009981 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rnmb" event={"ID":"da2eb229-fe73-4271-9c8a-3aa1dfdfa644","Type":"ContainerDied","Data":"b2d0183fd8e923941e36d5ce2bcc21ba6955d617c8a0ec3e113ae84802e62668"} Nov 24 22:13:22 crc kubenswrapper[4767]: I1124 22:13:22.011315 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rnmb" event={"ID":"da2eb229-fe73-4271-9c8a-3aa1dfdfa644","Type":"ContainerStarted","Data":"cf7065acf5335be1699e5b65d7dfb6af39a7a50536df0e0a09d20780a2fff9bb"} Nov 24 22:13:23 crc kubenswrapper[4767]: I1124 22:13:23.020968 4767 generic.go:334] "Generic (PLEG): container finished" podID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" containerID="f26c61f4e4e1b4169b175ee0fa00135804920d8057bf1a330067c87992181ddf" exitCode=0 Nov 24 22:13:23 crc kubenswrapper[4767]: I1124 22:13:23.021035 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rnmb" event={"ID":"da2eb229-fe73-4271-9c8a-3aa1dfdfa644","Type":"ContainerDied","Data":"f26c61f4e4e1b4169b175ee0fa00135804920d8057bf1a330067c87992181ddf"} Nov 24 22:13:24 crc kubenswrapper[4767]: I1124 22:13:24.053694 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rnmb" event={"ID":"da2eb229-fe73-4271-9c8a-3aa1dfdfa644","Type":"ContainerStarted","Data":"bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42"} Nov 24 22:13:24 crc kubenswrapper[4767]: I1124 22:13:24.094633 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-8rnmb" podStartSLOduration=2.687151736 podStartE2EDuration="4.094611558s" podCreationTimestamp="2025-11-24 22:13:20 +0000 UTC" firstStartedPulling="2025-11-24 22:13:22.011734575 +0000 UTC m=+2084.928717947" lastFinishedPulling="2025-11-24 22:13:23.419194397 +0000 UTC m=+2086.336177769" observedRunningTime="2025-11-24 22:13:24.087561968 +0000 UTC m=+2087.004545350" watchObservedRunningTime="2025-11-24 22:13:24.094611558 +0000 UTC m=+2087.011594930" Nov 24 22:13:25 crc kubenswrapper[4767]: I1124 22:13:25.394807 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:25 crc kubenswrapper[4767]: I1124 22:13:25.395195 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:26 crc kubenswrapper[4767]: I1124 22:13:26.444353 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rnwn9" podUID="18c57899-4216-446e-b594-fb85e797cbaf" containerName="registry-server" probeResult="failure" output=< Nov 24 22:13:26 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 22:13:26 crc kubenswrapper[4767]: > Nov 24 22:13:30 crc kubenswrapper[4767]: I1124 22:13:30.956158 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:30 crc kubenswrapper[4767]: I1124 22:13:30.956538 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:31 crc kubenswrapper[4767]: I1124 22:13:31.022989 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:31 crc kubenswrapper[4767]: I1124 22:13:31.186971 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:31 crc kubenswrapper[4767]: I1124 22:13:31.262477 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rnmb"] Nov 24 22:13:33 crc kubenswrapper[4767]: I1124 22:13:33.152634 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8rnmb" podUID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" containerName="registry-server" containerID="cri-o://bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42" gracePeriod=2 Nov 24 22:13:33 crc kubenswrapper[4767]: I1124 22:13:33.638206 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:33 crc kubenswrapper[4767]: I1124 22:13:33.831588 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-catalog-content\") pod \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " Nov 24 22:13:33 crc kubenswrapper[4767]: I1124 22:13:33.831748 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-utilities\") pod \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " Nov 24 22:13:33 crc kubenswrapper[4767]: I1124 22:13:33.831923 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hhh4\" (UniqueName: \"kubernetes.io/projected/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-kube-api-access-2hhh4\") pod \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\" (UID: \"da2eb229-fe73-4271-9c8a-3aa1dfdfa644\") " Nov 24 22:13:33 crc kubenswrapper[4767]: I1124 22:13:33.833922 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-utilities" (OuterVolumeSpecName: "utilities") pod "da2eb229-fe73-4271-9c8a-3aa1dfdfa644" (UID: "da2eb229-fe73-4271-9c8a-3aa1dfdfa644"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:13:33 crc kubenswrapper[4767]: I1124 22:13:33.839681 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-kube-api-access-2hhh4" (OuterVolumeSpecName: "kube-api-access-2hhh4") pod "da2eb229-fe73-4271-9c8a-3aa1dfdfa644" (UID: "da2eb229-fe73-4271-9c8a-3aa1dfdfa644"). InnerVolumeSpecName "kube-api-access-2hhh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:13:33 crc kubenswrapper[4767]: I1124 22:13:33.854373 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da2eb229-fe73-4271-9c8a-3aa1dfdfa644" (UID: "da2eb229-fe73-4271-9c8a-3aa1dfdfa644"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:13:33 crc kubenswrapper[4767]: I1124 22:13:33.934892 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:33 crc kubenswrapper[4767]: I1124 22:13:33.934917 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:33 crc kubenswrapper[4767]: I1124 22:13:33.934927 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hhh4\" (UniqueName: \"kubernetes.io/projected/da2eb229-fe73-4271-9c8a-3aa1dfdfa644-kube-api-access-2hhh4\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.167587 4767 generic.go:334] "Generic (PLEG): container finished" podID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" containerID="bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42" exitCode=0 Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.167652 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rnmb" event={"ID":"da2eb229-fe73-4271-9c8a-3aa1dfdfa644","Type":"ContainerDied","Data":"bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42"} Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.167685 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rnmb" event={"ID":"da2eb229-fe73-4271-9c8a-3aa1dfdfa644","Type":"ContainerDied","Data":"cf7065acf5335be1699e5b65d7dfb6af39a7a50536df0e0a09d20780a2fff9bb"} Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.167690 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rnmb" Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.167705 4767 scope.go:117] "RemoveContainer" containerID="bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42" Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.209447 4767 scope.go:117] "RemoveContainer" containerID="f26c61f4e4e1b4169b175ee0fa00135804920d8057bf1a330067c87992181ddf" Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.225875 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rnmb"] Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.242389 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rnmb"] Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.249049 4767 scope.go:117] "RemoveContainer" containerID="b2d0183fd8e923941e36d5ce2bcc21ba6955d617c8a0ec3e113ae84802e62668" Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.283624 4767 scope.go:117] "RemoveContainer" containerID="bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42" Nov 24 22:13:34 crc kubenswrapper[4767]: E1124 22:13:34.284112 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42\": container with ID starting with bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42 not found: ID does not exist" containerID="bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42" Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.284157 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42"} err="failed to get container status \"bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42\": rpc error: code = NotFound desc = could not find container \"bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42\": container with ID starting with bc71474cf72c4255867453dca16f65323346eb873e61d59e6de216d200afce42 not found: ID does not exist" Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.284204 4767 scope.go:117] "RemoveContainer" containerID="f26c61f4e4e1b4169b175ee0fa00135804920d8057bf1a330067c87992181ddf" Nov 24 22:13:34 crc kubenswrapper[4767]: E1124 22:13:34.284621 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f26c61f4e4e1b4169b175ee0fa00135804920d8057bf1a330067c87992181ddf\": container with ID starting with f26c61f4e4e1b4169b175ee0fa00135804920d8057bf1a330067c87992181ddf not found: ID does not exist" containerID="f26c61f4e4e1b4169b175ee0fa00135804920d8057bf1a330067c87992181ddf" Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.284649 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26c61f4e4e1b4169b175ee0fa00135804920d8057bf1a330067c87992181ddf"} err="failed to get container status \"f26c61f4e4e1b4169b175ee0fa00135804920d8057bf1a330067c87992181ddf\": rpc error: code = NotFound desc = could not find container \"f26c61f4e4e1b4169b175ee0fa00135804920d8057bf1a330067c87992181ddf\": container with ID starting with f26c61f4e4e1b4169b175ee0fa00135804920d8057bf1a330067c87992181ddf not found: ID does not exist" Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.284667 4767 scope.go:117] "RemoveContainer" 
containerID="b2d0183fd8e923941e36d5ce2bcc21ba6955d617c8a0ec3e113ae84802e62668" Nov 24 22:13:34 crc kubenswrapper[4767]: E1124 22:13:34.284912 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2d0183fd8e923941e36d5ce2bcc21ba6955d617c8a0ec3e113ae84802e62668\": container with ID starting with b2d0183fd8e923941e36d5ce2bcc21ba6955d617c8a0ec3e113ae84802e62668 not found: ID does not exist" containerID="b2d0183fd8e923941e36d5ce2bcc21ba6955d617c8a0ec3e113ae84802e62668" Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.284941 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2d0183fd8e923941e36d5ce2bcc21ba6955d617c8a0ec3e113ae84802e62668"} err="failed to get container status \"b2d0183fd8e923941e36d5ce2bcc21ba6955d617c8a0ec3e113ae84802e62668\": rpc error: code = NotFound desc = could not find container \"b2d0183fd8e923941e36d5ce2bcc21ba6955d617c8a0ec3e113ae84802e62668\": container with ID starting with b2d0183fd8e923941e36d5ce2bcc21ba6955d617c8a0ec3e113ae84802e62668 not found: ID does not exist" Nov 24 22:13:34 crc kubenswrapper[4767]: I1124 22:13:34.328181 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" path="/var/lib/kubelet/pods/da2eb229-fe73-4271-9c8a-3aa1dfdfa644/volumes" Nov 24 22:13:35 crc kubenswrapper[4767]: I1124 22:13:35.486115 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:35 crc kubenswrapper[4767]: I1124 22:13:35.551646 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:36 crc kubenswrapper[4767]: I1124 22:13:36.662477 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rnwn9"] Nov 24 22:13:37 crc kubenswrapper[4767]: I1124 22:13:37.202676 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rnwn9" podUID="18c57899-4216-446e-b594-fb85e797cbaf" containerName="registry-server" containerID="cri-o://43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d" gracePeriod=2 Nov 24 22:13:37 crc kubenswrapper[4767]: I1124 22:13:37.721184 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:37 crc kubenswrapper[4767]: I1124 22:13:37.810600 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-catalog-content\") pod \"18c57899-4216-446e-b594-fb85e797cbaf\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " Nov 24 22:13:37 crc kubenswrapper[4767]: I1124 22:13:37.810640 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2jt8\" (UniqueName: \"kubernetes.io/projected/18c57899-4216-446e-b594-fb85e797cbaf-kube-api-access-l2jt8\") pod \"18c57899-4216-446e-b594-fb85e797cbaf\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " Nov 24 22:13:37 crc kubenswrapper[4767]: I1124 22:13:37.810832 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-utilities\") pod \"18c57899-4216-446e-b594-fb85e797cbaf\" (UID: \"18c57899-4216-446e-b594-fb85e797cbaf\") " Nov 24 22:13:37 crc kubenswrapper[4767]: I1124 22:13:37.811942 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-utilities" (OuterVolumeSpecName: "utilities") pod "18c57899-4216-446e-b594-fb85e797cbaf" (UID: "18c57899-4216-446e-b594-fb85e797cbaf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:13:37 crc kubenswrapper[4767]: I1124 22:13:37.816846 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c57899-4216-446e-b594-fb85e797cbaf-kube-api-access-l2jt8" (OuterVolumeSpecName: "kube-api-access-l2jt8") pod "18c57899-4216-446e-b594-fb85e797cbaf" (UID: "18c57899-4216-446e-b594-fb85e797cbaf"). InnerVolumeSpecName "kube-api-access-l2jt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:13:37 crc kubenswrapper[4767]: I1124 22:13:37.913688 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2jt8\" (UniqueName: \"kubernetes.io/projected/18c57899-4216-446e-b594-fb85e797cbaf-kube-api-access-l2jt8\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:37 crc kubenswrapper[4767]: I1124 22:13:37.914004 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:37 crc kubenswrapper[4767]: I1124 22:13:37.914227 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18c57899-4216-446e-b594-fb85e797cbaf" (UID: "18c57899-4216-446e-b594-fb85e797cbaf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.015412 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c57899-4216-446e-b594-fb85e797cbaf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.212609 4767 generic.go:334] "Generic (PLEG): container finished" podID="18c57899-4216-446e-b594-fb85e797cbaf" containerID="43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d" exitCode=0 Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.212645 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnwn9" event={"ID":"18c57899-4216-446e-b594-fb85e797cbaf","Type":"ContainerDied","Data":"43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d"} Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.212670 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnwn9" event={"ID":"18c57899-4216-446e-b594-fb85e797cbaf","Type":"ContainerDied","Data":"075afe21c0373a1b4a1c8ea04a62f4b3912677899c2718d1116f1d78c7c81dee"} Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.212686 4767 scope.go:117] "RemoveContainer" containerID="43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d" Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.212694 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rnwn9" Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.233428 4767 scope.go:117] "RemoveContainer" containerID="4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d" Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.250574 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rnwn9"] Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.270434 4767 scope.go:117] "RemoveContainer" containerID="c6d6302969d779948be89f7121602466474f3b6bdca3753fe532c05134a16a55" Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.292643 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rnwn9"] Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.308491 4767 scope.go:117] "RemoveContainer" containerID="43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d" Nov 24 22:13:38 crc kubenswrapper[4767]: E1124 22:13:38.308965 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d\": container with ID starting with 43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d not found: ID does not exist" containerID="43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d" Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.309003 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d"} err="failed to get container status \"43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d\": rpc error: code = NotFound desc = could not find container \"43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d\": container with ID starting with 43113d40e58396a5478bc92049ef844aab9bca30dfa1a3fdb0afd6170a07114d not found: ID does not exist" Nov 24 22:13:38 crc 
kubenswrapper[4767]: I1124 22:13:38.309025 4767 scope.go:117] "RemoveContainer" containerID="4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d" Nov 24 22:13:38 crc kubenswrapper[4767]: E1124 22:13:38.309260 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d\": container with ID starting with 4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d not found: ID does not exist" containerID="4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d" Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.309311 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d"} err="failed to get container status \"4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d\": rpc error: code = NotFound desc = could not find container \"4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d\": container with ID starting with 4d822bf47dc4f15998dcbb59a4260eed720be694f90819615bb3ac279f2bca1d not found: ID does not exist" Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.309331 4767 scope.go:117] "RemoveContainer" containerID="c6d6302969d779948be89f7121602466474f3b6bdca3753fe532c05134a16a55" Nov 24 22:13:38 crc kubenswrapper[4767]: E1124 22:13:38.309620 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d6302969d779948be89f7121602466474f3b6bdca3753fe532c05134a16a55\": container with ID starting with c6d6302969d779948be89f7121602466474f3b6bdca3753fe532c05134a16a55 not found: ID does not exist" containerID="c6d6302969d779948be89f7121602466474f3b6bdca3753fe532c05134a16a55" Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.309650 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d6302969d779948be89f7121602466474f3b6bdca3753fe532c05134a16a55"} err="failed to get container status \"c6d6302969d779948be89f7121602466474f3b6bdca3753fe532c05134a16a55\": rpc error: code = NotFound desc = could not find container \"c6d6302969d779948be89f7121602466474f3b6bdca3753fe532c05134a16a55\": container with ID starting with c6d6302969d779948be89f7121602466474f3b6bdca3753fe532c05134a16a55 not found: ID does not exist" Nov 24 22:13:38 crc kubenswrapper[4767]: I1124 22:13:38.327457 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c57899-4216-446e-b594-fb85e797cbaf" path="/var/lib/kubelet/pods/18c57899-4216-446e-b594-fb85e797cbaf/volumes" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.224695 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-72lkf"] Nov 24 22:13:51 crc kubenswrapper[4767]: E1124 22:13:51.225621 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c57899-4216-446e-b594-fb85e797cbaf" containerName="extract-utilities" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.225634 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c57899-4216-446e-b594-fb85e797cbaf" containerName="extract-utilities" Nov 24 22:13:51 crc kubenswrapper[4767]: E1124 22:13:51.225649 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" containerName="extract-utilities" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.225657 4767 
state_mem.go:107] "Deleted CPUSet assignment" podUID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" containerName="extract-utilities" Nov 24 22:13:51 crc kubenswrapper[4767]: E1124 22:13:51.225667 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" containerName="registry-server" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.225673 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" containerName="registry-server" Nov 24 22:13:51 crc kubenswrapper[4767]: E1124 22:13:51.225699 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c57899-4216-446e-b594-fb85e797cbaf" containerName="registry-server" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.225705 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c57899-4216-446e-b594-fb85e797cbaf" containerName="registry-server" Nov 24 22:13:51 crc kubenswrapper[4767]: E1124 22:13:51.225720 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" containerName="extract-content" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.225726 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" containerName="extract-content" Nov 24 22:13:51 crc kubenswrapper[4767]: E1124 22:13:51.225741 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c57899-4216-446e-b594-fb85e797cbaf" containerName="extract-content" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.225746 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c57899-4216-446e-b594-fb85e797cbaf" containerName="extract-content" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.225917 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c57899-4216-446e-b594-fb85e797cbaf" containerName="registry-server" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.225926 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2eb229-fe73-4271-9c8a-3aa1dfdfa644" containerName="registry-server" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.227420 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.240497 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-72lkf"] Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.302004 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-catalog-content\") pod \"certified-operators-72lkf\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.302106 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rd7c\" (UniqueName: \"kubernetes.io/projected/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-kube-api-access-9rd7c\") pod \"certified-operators-72lkf\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.302335 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-utilities\") pod \"certified-operators-72lkf\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.404496 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rd7c\" (UniqueName: \"kubernetes.io/projected/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-kube-api-access-9rd7c\") pod \"certified-operators-72lkf\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.404630 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-utilities\") pod \"certified-operators-72lkf\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.404748 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-catalog-content\") pod \"certified-operators-72lkf\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.405488 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-utilities\") pod \"certified-operators-72lkf\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.405550 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-catalog-content\") pod \"certified-operators-72lkf\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.429160 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9rd7c\" (UniqueName: \"kubernetes.io/projected/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-kube-api-access-9rd7c\") pod \"certified-operators-72lkf\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:13:51 crc kubenswrapper[4767]: I1124 22:13:51.568167 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:13:52 crc kubenswrapper[4767]: I1124 22:13:52.096245 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-72lkf"] Nov 24 22:13:52 crc kubenswrapper[4767]: I1124 22:13:52.362770 4767 generic.go:334] "Generic (PLEG): container finished" podID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" containerID="33dae4b404f067a2d3469f5e615ffbfca66273cd822fe014cae630d40bfc5828" exitCode=0 Nov 24 22:13:52 crc kubenswrapper[4767]: I1124 22:13:52.362817 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72lkf" event={"ID":"216bc7fd-bd10-4a94-a0aa-23e7e3063c60","Type":"ContainerDied","Data":"33dae4b404f067a2d3469f5e615ffbfca66273cd822fe014cae630d40bfc5828"} Nov 24 22:13:52 crc kubenswrapper[4767]: I1124 22:13:52.362850 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72lkf" event={"ID":"216bc7fd-bd10-4a94-a0aa-23e7e3063c60","Type":"ContainerStarted","Data":"7fbcb51791ea6c3a46c2e4b47caa587b5f404c95f4b7f58ff9f07777d0f84676"} Nov 24 22:13:53 crc kubenswrapper[4767]: I1124 22:13:53.390178 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72lkf" event={"ID":"216bc7fd-bd10-4a94-a0aa-23e7e3063c60","Type":"ContainerStarted","Data":"8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811"} Nov 24 22:13:54 crc kubenswrapper[4767]: I1124 22:13:54.405079 4767 generic.go:334] "Generic (PLEG): container finished" podID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" containerID="8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811" exitCode=0 Nov 24 22:13:54 crc kubenswrapper[4767]: I1124 22:13:54.405415 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72lkf" event={"ID":"216bc7fd-bd10-4a94-a0aa-23e7e3063c60","Type":"ContainerDied","Data":"8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811"} Nov 24 22:13:54 crc kubenswrapper[4767]: I1124 22:13:54.406618 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72lkf" event={"ID":"216bc7fd-bd10-4a94-a0aa-23e7e3063c60","Type":"ContainerStarted","Data":"479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0"} Nov 24 22:13:54 crc kubenswrapper[4767]: I1124 22:13:54.426914 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-72lkf" podStartSLOduration=1.9922378809999999 podStartE2EDuration="3.426898637s" podCreationTimestamp="2025-11-24 22:13:51 +0000 UTC" firstStartedPulling="2025-11-24 22:13:52.364673111 +0000 UTC m=+2115.281656513" lastFinishedPulling="2025-11-24 22:13:53.799333897 +0000 UTC m=+2116.716317269" observedRunningTime="2025-11-24 22:13:54.424088527 +0000 UTC m=+2117.341071919" watchObservedRunningTime="2025-11-24 22:13:54.426898637 +0000 UTC m=+2117.343882009" Nov 24 22:14:01 crc kubenswrapper[4767]: I1124 22:14:01.569131 4767 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:14:01 crc kubenswrapper[4767]: I1124 22:14:01.569672 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:14:01 crc kubenswrapper[4767]: I1124 22:14:01.640095 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:14:02 crc kubenswrapper[4767]: I1124 22:14:02.557971 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:14:02 crc kubenswrapper[4767]: I1124 22:14:02.618985 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-72lkf"] Nov 24 22:14:04 crc kubenswrapper[4767]: I1124 22:14:04.525165 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-72lkf" podUID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" containerName="registry-server" containerID="cri-o://479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0" gracePeriod=2 Nov 24 22:14:04 crc kubenswrapper[4767]: E1124 22:14:04.746302 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod216bc7fd_bd10_4a94_a0aa_23e7e3063c60.slice/crio-479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:14:04 crc kubenswrapper[4767]: I1124 22:14:04.984776 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.111792 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-utilities\") pod \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.112259 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rd7c\" (UniqueName: \"kubernetes.io/projected/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-kube-api-access-9rd7c\") pod \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.112407 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-catalog-content\") pod \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\" (UID: \"216bc7fd-bd10-4a94-a0aa-23e7e3063c60\") " Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.113009 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-utilities" (OuterVolumeSpecName: "utilities") pod "216bc7fd-bd10-4a94-a0aa-23e7e3063c60" (UID: "216bc7fd-bd10-4a94-a0aa-23e7e3063c60"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.113156 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.119435 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-kube-api-access-9rd7c" (OuterVolumeSpecName: "kube-api-access-9rd7c") pod "216bc7fd-bd10-4a94-a0aa-23e7e3063c60" (UID: "216bc7fd-bd10-4a94-a0aa-23e7e3063c60"). InnerVolumeSpecName "kube-api-access-9rd7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.164983 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "216bc7fd-bd10-4a94-a0aa-23e7e3063c60" (UID: "216bc7fd-bd10-4a94-a0aa-23e7e3063c60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.214778 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rd7c\" (UniqueName: \"kubernetes.io/projected/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-kube-api-access-9rd7c\") on node \"crc\" DevicePath \"\"" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.214834 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/216bc7fd-bd10-4a94-a0aa-23e7e3063c60-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.481072 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.481137 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.540603 4767 generic.go:334] "Generic (PLEG): container finished" podID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" containerID="479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0" exitCode=0 Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.540669 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72lkf" event={"ID":"216bc7fd-bd10-4a94-a0aa-23e7e3063c60","Type":"ContainerDied","Data":"479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0"} Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.540685 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-72lkf" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.540711 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72lkf" event={"ID":"216bc7fd-bd10-4a94-a0aa-23e7e3063c60","Type":"ContainerDied","Data":"7fbcb51791ea6c3a46c2e4b47caa587b5f404c95f4b7f58ff9f07777d0f84676"} Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.540735 4767 scope.go:117] "RemoveContainer" containerID="479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.575197 4767 scope.go:117] "RemoveContainer" containerID="8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.584140 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-72lkf"] Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.601533 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-72lkf"] Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.610431 4767 scope.go:117] "RemoveContainer" containerID="33dae4b404f067a2d3469f5e615ffbfca66273cd822fe014cae630d40bfc5828" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.649626 4767 scope.go:117] "RemoveContainer" containerID="479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0" Nov 24 22:14:05 crc kubenswrapper[4767]: E1124 22:14:05.650158 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0\": container with ID starting with 479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0 not found: ID does not exist" containerID="479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.650230 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0"} err="failed to get container status \"479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0\": rpc error: code = NotFound desc = could not find container \"479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0\": container with ID starting with 479be9d94a08978e7b8625c2ea9b09c3c22e0127e31b42ebc2808eb87fb559d0 not found: ID does not exist" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.650299 4767 scope.go:117] "RemoveContainer" containerID="8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811" Nov 24 22:14:05 crc kubenswrapper[4767]: E1124 22:14:05.650704 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811\": container with ID starting with 8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811 not found: ID does not exist" containerID="8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.650751 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811"} err="failed to get container status \"8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811\": rpc error: code = NotFound desc = could not find 
container \"8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811\": container with ID starting with 8a042e7103bec5bf3761b81d059268649453cf58538df0ddbee12027f7ff3811 not found: ID does not exist" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.650777 4767 scope.go:117] "RemoveContainer" containerID="33dae4b404f067a2d3469f5e615ffbfca66273cd822fe014cae630d40bfc5828" Nov 24 22:14:05 crc kubenswrapper[4767]: E1124 22:14:05.651346 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33dae4b404f067a2d3469f5e615ffbfca66273cd822fe014cae630d40bfc5828\": container with ID starting with 33dae4b404f067a2d3469f5e615ffbfca66273cd822fe014cae630d40bfc5828 not found: ID does not exist" containerID="33dae4b404f067a2d3469f5e615ffbfca66273cd822fe014cae630d40bfc5828" Nov 24 22:14:05 crc kubenswrapper[4767]: I1124 22:14:05.651394 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33dae4b404f067a2d3469f5e615ffbfca66273cd822fe014cae630d40bfc5828"} err="failed to get container status \"33dae4b404f067a2d3469f5e615ffbfca66273cd822fe014cae630d40bfc5828\": rpc error: code = NotFound desc = could not find container \"33dae4b404f067a2d3469f5e615ffbfca66273cd822fe014cae630d40bfc5828\": container with ID starting with 33dae4b404f067a2d3469f5e615ffbfca66273cd822fe014cae630d40bfc5828 not found: ID does not exist" Nov 24 22:14:06 crc kubenswrapper[4767]: I1124 22:14:06.323981 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" path="/var/lib/kubelet/pods/216bc7fd-bd10-4a94-a0aa-23e7e3063c60/volumes" Nov 24 22:14:35 crc kubenswrapper[4767]: I1124 22:14:35.481453 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:14:35 crc kubenswrapper[4767]: I1124 22:14:35.481991 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.158772 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs"] Nov 24 22:15:00 crc kubenswrapper[4767]: E1124 22:15:00.159582 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" containerName="registry-server" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.159598 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" containerName="registry-server" Nov 24 22:15:00 crc kubenswrapper[4767]: E1124 22:15:00.159634 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" containerName="extract-content" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.159643 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" containerName="extract-content" Nov 24 22:15:00 crc kubenswrapper[4767]: E1124 22:15:00.159661 4767 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" containerName="extract-utilities" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.159671 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" containerName="extract-utilities" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.159875 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="216bc7fd-bd10-4a94-a0aa-23e7e3063c60" containerName="registry-server" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.160600 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.163478 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.163843 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.174238 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs"] Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.271386 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/555bcb9b-a2cc-4c32-9655-b14a430346cf-secret-volume\") pod \"collect-profiles-29400375-vhcxs\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.271492 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n69zv\" (UniqueName: \"kubernetes.io/projected/555bcb9b-a2cc-4c32-9655-b14a430346cf-kube-api-access-n69zv\") pod \"collect-profiles-29400375-vhcxs\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.271568 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/555bcb9b-a2cc-4c32-9655-b14a430346cf-config-volume\") pod \"collect-profiles-29400375-vhcxs\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.373069 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n69zv\" (UniqueName: \"kubernetes.io/projected/555bcb9b-a2cc-4c32-9655-b14a430346cf-kube-api-access-n69zv\") pod \"collect-profiles-29400375-vhcxs\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.373464 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/555bcb9b-a2cc-4c32-9655-b14a430346cf-config-volume\") pod \"collect-profiles-29400375-vhcxs\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.373724 4767 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/555bcb9b-a2cc-4c32-9655-b14a430346cf-secret-volume\") pod \"collect-profiles-29400375-vhcxs\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.374776 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/555bcb9b-a2cc-4c32-9655-b14a430346cf-config-volume\") pod \"collect-profiles-29400375-vhcxs\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.381417 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/555bcb9b-a2cc-4c32-9655-b14a430346cf-secret-volume\") pod \"collect-profiles-29400375-vhcxs\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.390841 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n69zv\" (UniqueName: \"kubernetes.io/projected/555bcb9b-a2cc-4c32-9655-b14a430346cf-kube-api-access-n69zv\") pod \"collect-profiles-29400375-vhcxs\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.492010 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:00 crc kubenswrapper[4767]: I1124 22:15:00.940519 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs"] Nov 24 22:15:01 crc kubenswrapper[4767]: I1124 22:15:01.149865 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" event={"ID":"555bcb9b-a2cc-4c32-9655-b14a430346cf","Type":"ContainerStarted","Data":"accb47d64b5665f3970f5f6a8b07656660b73631c553e67949bac000c7946fe2"} Nov 24 22:15:01 crc kubenswrapper[4767]: I1124 22:15:01.149942 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" event={"ID":"555bcb9b-a2cc-4c32-9655-b14a430346cf","Type":"ContainerStarted","Data":"260eaca2bf5a39927397f26604d82a756981f0d0e8ec0f7672cec0348051d481"} Nov 24 22:15:01 crc kubenswrapper[4767]: I1124 22:15:01.173328 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" podStartSLOduration=1.173309292 podStartE2EDuration="1.173309292s" podCreationTimestamp="2025-11-24 22:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:15:01.166239941 +0000 UTC m=+2184.083223313" watchObservedRunningTime="2025-11-24 22:15:01.173309292 +0000 UTC m=+2184.090292664" Nov 24 22:15:02 crc kubenswrapper[4767]: I1124 22:15:02.164187 4767 generic.go:334] "Generic (PLEG): container finished" podID="555bcb9b-a2cc-4c32-9655-b14a430346cf" containerID="accb47d64b5665f3970f5f6a8b07656660b73631c553e67949bac000c7946fe2" exitCode=0 
Nov 24 22:15:02 crc kubenswrapper[4767]: I1124 22:15:02.164333 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" event={"ID":"555bcb9b-a2cc-4c32-9655-b14a430346cf","Type":"ContainerDied","Data":"accb47d64b5665f3970f5f6a8b07656660b73631c553e67949bac000c7946fe2"} Nov 24 22:15:03 crc kubenswrapper[4767]: I1124 22:15:03.503566 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:03 crc kubenswrapper[4767]: I1124 22:15:03.636874 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/555bcb9b-a2cc-4c32-9655-b14a430346cf-config-volume\") pod \"555bcb9b-a2cc-4c32-9655-b14a430346cf\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " Nov 24 22:15:03 crc kubenswrapper[4767]: I1124 22:15:03.636936 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n69zv\" (UniqueName: \"kubernetes.io/projected/555bcb9b-a2cc-4c32-9655-b14a430346cf-kube-api-access-n69zv\") pod \"555bcb9b-a2cc-4c32-9655-b14a430346cf\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " Nov 24 22:15:03 crc kubenswrapper[4767]: I1124 22:15:03.637092 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/555bcb9b-a2cc-4c32-9655-b14a430346cf-secret-volume\") pod \"555bcb9b-a2cc-4c32-9655-b14a430346cf\" (UID: \"555bcb9b-a2cc-4c32-9655-b14a430346cf\") " Nov 24 22:15:03 crc kubenswrapper[4767]: I1124 22:15:03.638048 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/555bcb9b-a2cc-4c32-9655-b14a430346cf-config-volume" (OuterVolumeSpecName: "config-volume") pod "555bcb9b-a2cc-4c32-9655-b14a430346cf" (UID: "555bcb9b-a2cc-4c32-9655-b14a430346cf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:15:03 crc kubenswrapper[4767]: I1124 22:15:03.644501 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555bcb9b-a2cc-4c32-9655-b14a430346cf-kube-api-access-n69zv" (OuterVolumeSpecName: "kube-api-access-n69zv") pod "555bcb9b-a2cc-4c32-9655-b14a430346cf" (UID: "555bcb9b-a2cc-4c32-9655-b14a430346cf"). InnerVolumeSpecName "kube-api-access-n69zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:15:03 crc kubenswrapper[4767]: I1124 22:15:03.645981 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555bcb9b-a2cc-4c32-9655-b14a430346cf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "555bcb9b-a2cc-4c32-9655-b14a430346cf" (UID: "555bcb9b-a2cc-4c32-9655-b14a430346cf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:15:03 crc kubenswrapper[4767]: I1124 22:15:03.739439 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/555bcb9b-a2cc-4c32-9655-b14a430346cf-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:03 crc kubenswrapper[4767]: I1124 22:15:03.739797 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/555bcb9b-a2cc-4c32-9655-b14a430346cf-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:03 crc kubenswrapper[4767]: I1124 22:15:03.739812 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n69zv\" (UniqueName: \"kubernetes.io/projected/555bcb9b-a2cc-4c32-9655-b14a430346cf-kube-api-access-n69zv\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:04 crc kubenswrapper[4767]: I1124 22:15:04.188480 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" event={"ID":"555bcb9b-a2cc-4c32-9655-b14a430346cf","Type":"ContainerDied","Data":"260eaca2bf5a39927397f26604d82a756981f0d0e8ec0f7672cec0348051d481"} Nov 24 22:15:04 crc kubenswrapper[4767]: I1124 22:15:04.188859 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="260eaca2bf5a39927397f26604d82a756981f0d0e8ec0f7672cec0348051d481" Nov 24 22:15:04 crc kubenswrapper[4767]: I1124 22:15:04.188543 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs" Nov 24 22:15:04 crc kubenswrapper[4767]: I1124 22:15:04.267111 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk"] Nov 24 22:15:04 crc kubenswrapper[4767]: I1124 22:15:04.276693 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-vrtxk"] Nov 24 22:15:04 crc kubenswrapper[4767]: I1124 22:15:04.328339 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0934816-1e19-4894-a691-f3e53551062a" path="/var/lib/kubelet/pods/b0934816-1e19-4894-a691-f3e53551062a/volumes" Nov 24 22:15:05 crc kubenswrapper[4767]: I1124 22:15:05.481441 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:15:05 crc kubenswrapper[4767]: I1124 22:15:05.481713 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:15:05 crc kubenswrapper[4767]: I1124 22:15:05.481750 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 22:15:05 crc kubenswrapper[4767]: I1124 22:15:05.482180 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d"} 
pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:15:05 crc kubenswrapper[4767]: I1124 22:15:05.482223 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" gracePeriod=600 Nov 24 22:15:05 crc kubenswrapper[4767]: E1124 22:15:05.608366 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:15:06 crc kubenswrapper[4767]: I1124 22:15:06.224368 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" exitCode=0 Nov 24 22:15:06 crc kubenswrapper[4767]: I1124 22:15:06.224412 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d"} Nov 24 22:15:06 crc kubenswrapper[4767]: I1124 22:15:06.224446 4767 scope.go:117] "RemoveContainer" containerID="5212ee13f9ec884476c9d08510699ab10c1815cd84c7d59fe73ece4597feed64" Nov 24 22:15:06 crc kubenswrapper[4767]: I1124 22:15:06.225202 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:15:06 crc kubenswrapper[4767]: E1124 22:15:06.225533 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:15:17 crc kubenswrapper[4767]: I1124 22:15:17.314302 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:15:17 crc kubenswrapper[4767]: E1124 22:15:17.315617 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:15:30 crc kubenswrapper[4767]: I1124 22:15:30.314777 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:15:30 crc kubenswrapper[4767]: E1124 22:15:30.315602 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:15:43 crc kubenswrapper[4767]: I1124 22:15:43.332642 4767 scope.go:117] "RemoveContainer" containerID="190d640e1bfc027105ca4e59f647df3347b3a6c15de52228a086112447438f1d" Nov 24 22:15:44 crc kubenswrapper[4767]: I1124 22:15:44.313736 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:15:44 crc kubenswrapper[4767]: E1124 22:15:44.314634 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:15:57 crc kubenswrapper[4767]: I1124 22:15:57.313829 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:15:57 crc kubenswrapper[4767]: E1124 22:15:57.315889 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:16:12 crc kubenswrapper[4767]: I1124 22:16:12.313251 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:16:12 crc kubenswrapper[4767]: E1124 22:16:12.314040 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:16:24 crc kubenswrapper[4767]: I1124 22:16:24.313872 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:16:24 crc kubenswrapper[4767]: E1124 22:16:24.314818 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:16:36 crc kubenswrapper[4767]: I1124 22:16:36.313819 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:16:36 crc kubenswrapper[4767]: E1124 22:16:36.314662 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:16:48 crc kubenswrapper[4767]: I1124 22:16:48.320359 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:16:48 crc kubenswrapper[4767]: E1124 22:16:48.321022 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:17:00 crc kubenswrapper[4767]: I1124 22:17:00.931693 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fmqnd"] Nov 24 22:17:00 crc kubenswrapper[4767]: E1124 22:17:00.932526 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555bcb9b-a2cc-4c32-9655-b14a430346cf" containerName="collect-profiles" Nov 24 22:17:00 crc kubenswrapper[4767]: I1124 22:17:00.932538 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="555bcb9b-a2cc-4c32-9655-b14a430346cf" containerName="collect-profiles" Nov 24 22:17:00 crc kubenswrapper[4767]: I1124 22:17:00.932722 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="555bcb9b-a2cc-4c32-9655-b14a430346cf" containerName="collect-profiles" Nov 24 22:17:00 crc kubenswrapper[4767]: I1124 22:17:00.934042 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:00 crc kubenswrapper[4767]: I1124 22:17:00.951450 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmqnd"] Nov 24 22:17:01 crc kubenswrapper[4767]: I1124 22:17:01.093873 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-utilities\") pod \"community-operators-fmqnd\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:01 crc kubenswrapper[4767]: I1124 22:17:01.094385 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-catalog-content\") pod \"community-operators-fmqnd\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:01 crc kubenswrapper[4767]: I1124 22:17:01.094558 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkxhr\" (UniqueName: \"kubernetes.io/projected/9ac31544-be85-4085-9666-7213b1638074-kube-api-access-lkxhr\") pod \"community-operators-fmqnd\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:01 crc kubenswrapper[4767]: I1124 22:17:01.197158 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-utilities\") pod \"community-operators-fmqnd\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:01 crc kubenswrapper[4767]: I1124 22:17:01.197388 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-catalog-content\") pod \"community-operators-fmqnd\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:01 crc kubenswrapper[4767]: I1124 22:17:01.197576 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkxhr\" (UniqueName: \"kubernetes.io/projected/9ac31544-be85-4085-9666-7213b1638074-kube-api-access-lkxhr\") pod \"community-operators-fmqnd\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:01 crc kubenswrapper[4767]: I1124 22:17:01.197803 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-utilities\") pod \"community-operators-fmqnd\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:01 crc kubenswrapper[4767]: I1124 22:17:01.198006 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-catalog-content\") pod \"community-operators-fmqnd\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:01 crc kubenswrapper[4767]: I1124 22:17:01.226377 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lkxhr\" (UniqueName: \"kubernetes.io/projected/9ac31544-be85-4085-9666-7213b1638074-kube-api-access-lkxhr\") pod \"community-operators-fmqnd\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:01 crc kubenswrapper[4767]: I1124 22:17:01.252079 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:01 crc kubenswrapper[4767]: I1124 22:17:01.795992 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmqnd"] Nov 24 22:17:02 crc kubenswrapper[4767]: I1124 22:17:02.315618 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:17:02 crc kubenswrapper[4767]: E1124 22:17:02.316077 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:17:02 crc kubenswrapper[4767]: I1124 22:17:02.452348 4767 generic.go:334] "Generic (PLEG): container finished" podID="9ac31544-be85-4085-9666-7213b1638074" containerID="b494de1e2dc026595d837ebf4099a9f34042570669db47d5c118282f9047c4f4" exitCode=0 Nov 24 22:17:02 crc kubenswrapper[4767]: I1124 22:17:02.452430 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqnd" event={"ID":"9ac31544-be85-4085-9666-7213b1638074","Type":"ContainerDied","Data":"b494de1e2dc026595d837ebf4099a9f34042570669db47d5c118282f9047c4f4"} Nov 24 22:17:02 crc kubenswrapper[4767]: I1124 22:17:02.452467 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqnd" event={"ID":"9ac31544-be85-4085-9666-7213b1638074","Type":"ContainerStarted","Data":"f35d96e41cc546413f1c643ad90e3212b859212d0bfbb51fab36f1e0475edd6d"} Nov 24 22:17:02 crc kubenswrapper[4767]: I1124 22:17:02.456240 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:17:03 crc kubenswrapper[4767]: I1124 22:17:03.464584 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqnd" event={"ID":"9ac31544-be85-4085-9666-7213b1638074","Type":"ContainerStarted","Data":"46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0"} Nov 24 22:17:04 crc kubenswrapper[4767]: I1124 22:17:04.481103 4767 generic.go:334] "Generic (PLEG): container finished" podID="9ac31544-be85-4085-9666-7213b1638074" containerID="46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0" exitCode=0 Nov 24 22:17:04 crc kubenswrapper[4767]: I1124 22:17:04.481187 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqnd" event={"ID":"9ac31544-be85-4085-9666-7213b1638074","Type":"ContainerDied","Data":"46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0"} Nov 24 22:17:05 crc kubenswrapper[4767]: I1124 22:17:05.496093 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqnd" 
event={"ID":"9ac31544-be85-4085-9666-7213b1638074","Type":"ContainerStarted","Data":"15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05"} Nov 24 22:17:05 crc kubenswrapper[4767]: I1124 22:17:05.523630 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fmqnd" podStartSLOduration=3.065664847 podStartE2EDuration="5.523613663s" podCreationTimestamp="2025-11-24 22:17:00 +0000 UTC" firstStartedPulling="2025-11-24 22:17:02.455933513 +0000 UTC m=+2305.372916885" lastFinishedPulling="2025-11-24 22:17:04.913882329 +0000 UTC m=+2307.830865701" observedRunningTime="2025-11-24 22:17:05.516060089 +0000 UTC m=+2308.433043481" watchObservedRunningTime="2025-11-24 22:17:05.523613663 +0000 UTC m=+2308.440597035" Nov 24 22:17:11 crc kubenswrapper[4767]: I1124 22:17:11.252251 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:11 crc kubenswrapper[4767]: I1124 22:17:11.252887 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:11 crc kubenswrapper[4767]: I1124 22:17:11.323253 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:11 crc kubenswrapper[4767]: I1124 22:17:11.600039 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:11 crc kubenswrapper[4767]: I1124 22:17:11.665973 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fmqnd"] Nov 24 22:17:13 crc kubenswrapper[4767]: I1124 22:17:13.574231 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fmqnd" podUID="9ac31544-be85-4085-9666-7213b1638074" containerName="registry-server" containerID="cri-o://15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05" gracePeriod=2 Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.165442 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.276196 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkxhr\" (UniqueName: \"kubernetes.io/projected/9ac31544-be85-4085-9666-7213b1638074-kube-api-access-lkxhr\") pod \"9ac31544-be85-4085-9666-7213b1638074\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.276411 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-catalog-content\") pod \"9ac31544-be85-4085-9666-7213b1638074\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.276501 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-utilities\") pod \"9ac31544-be85-4085-9666-7213b1638074\" (UID: \"9ac31544-be85-4085-9666-7213b1638074\") " Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.277363 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-utilities" (OuterVolumeSpecName: "utilities") pod "9ac31544-be85-4085-9666-7213b1638074" (UID: "9ac31544-be85-4085-9666-7213b1638074"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.281552 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac31544-be85-4085-9666-7213b1638074-kube-api-access-lkxhr" (OuterVolumeSpecName: "kube-api-access-lkxhr") pod "9ac31544-be85-4085-9666-7213b1638074" (UID: "9ac31544-be85-4085-9666-7213b1638074"). InnerVolumeSpecName "kube-api-access-lkxhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.334962 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ac31544-be85-4085-9666-7213b1638074" (UID: "9ac31544-be85-4085-9666-7213b1638074"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.380400 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.380465 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac31544-be85-4085-9666-7213b1638074-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.380482 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkxhr\" (UniqueName: \"kubernetes.io/projected/9ac31544-be85-4085-9666-7213b1638074-kube-api-access-lkxhr\") on node \"crc\" DevicePath \"\"" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.585398 4767 generic.go:334] "Generic (PLEG): container finished" podID="9ac31544-be85-4085-9666-7213b1638074" containerID="15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05" exitCode=0 Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.585447 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqnd" event={"ID":"9ac31544-be85-4085-9666-7213b1638074","Type":"ContainerDied","Data":"15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05"} Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.585479 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqnd" event={"ID":"9ac31544-be85-4085-9666-7213b1638074","Type":"ContainerDied","Data":"f35d96e41cc546413f1c643ad90e3212b859212d0bfbb51fab36f1e0475edd6d"} Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.585478 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fmqnd" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.585501 4767 scope.go:117] "RemoveContainer" containerID="15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.613748 4767 scope.go:117] "RemoveContainer" containerID="46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.630959 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fmqnd"] Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.639740 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fmqnd"] Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.640851 4767 scope.go:117] "RemoveContainer" containerID="b494de1e2dc026595d837ebf4099a9f34042570669db47d5c118282f9047c4f4" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.698698 4767 scope.go:117] "RemoveContainer" containerID="15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05" Nov 24 22:17:14 crc kubenswrapper[4767]: E1124 22:17:14.699137 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05\": container with ID starting with 15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05 not found: ID does not exist" containerID="15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.699188 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05"} err="failed to get container status \"15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05\": rpc error: code = NotFound desc = could not find container \"15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05\": container with ID starting with 15d91a6420d74190fa757ac8ea483906ba09d3132238679071f7c671c4165d05 not found: ID does not exist" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.699222 4767 scope.go:117] "RemoveContainer" containerID="46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0" Nov 24 22:17:14 crc kubenswrapper[4767]: E1124 22:17:14.699554 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0\": container with ID starting with 46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0 not found: ID does not exist" containerID="46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.699576 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0"} err="failed to get container status \"46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0\": rpc error: code = NotFound desc = could not find container \"46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0\": container with ID starting with 46dd3a11960a95b615017ea53c34f1931c8ec0c5f320b78c1e7bbf07bcaf8ad0 not found: ID does not exist" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.699591 4767 scope.go:117] "RemoveContainer" 
containerID="b494de1e2dc026595d837ebf4099a9f34042570669db47d5c118282f9047c4f4" Nov 24 22:17:14 crc kubenswrapper[4767]: E1124 22:17:14.700036 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b494de1e2dc026595d837ebf4099a9f34042570669db47d5c118282f9047c4f4\": container with ID starting with b494de1e2dc026595d837ebf4099a9f34042570669db47d5c118282f9047c4f4 not found: ID does not exist" containerID="b494de1e2dc026595d837ebf4099a9f34042570669db47d5c118282f9047c4f4" Nov 24 22:17:14 crc kubenswrapper[4767]: I1124 22:17:14.700088 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b494de1e2dc026595d837ebf4099a9f34042570669db47d5c118282f9047c4f4"} err="failed to get container status \"b494de1e2dc026595d837ebf4099a9f34042570669db47d5c118282f9047c4f4\": rpc error: code = NotFound desc = could not find container \"b494de1e2dc026595d837ebf4099a9f34042570669db47d5c118282f9047c4f4\": container with ID starting with b494de1e2dc026595d837ebf4099a9f34042570669db47d5c118282f9047c4f4 not found: ID does not exist" Nov 24 22:17:16 crc kubenswrapper[4767]: I1124 22:17:16.331383 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac31544-be85-4085-9666-7213b1638074" path="/var/lib/kubelet/pods/9ac31544-be85-4085-9666-7213b1638074/volumes" Nov 24 22:17:17 crc kubenswrapper[4767]: I1124 22:17:17.314134 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:17:17 crc kubenswrapper[4767]: E1124 22:17:17.315026 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:17:30 crc kubenswrapper[4767]: I1124 22:17:30.315464 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:17:30 crc kubenswrapper[4767]: E1124 22:17:30.316909 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:17:40 crc kubenswrapper[4767]: I1124 22:17:40.917373 4767 generic.go:334] "Generic (PLEG): container finished" podID="12cea285-00cd-40e4-b751-75563f414f33" containerID="9def48b776eb1417da4f47ecc8f23a14e1f0b91fb4f937d86612de800a3c46b6" exitCode=0 Nov 24 22:17:40 crc kubenswrapper[4767]: I1124 22:17:40.917476 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" event={"ID":"12cea285-00cd-40e4-b751-75563f414f33","Type":"ContainerDied","Data":"9def48b776eb1417da4f47ecc8f23a14e1f0b91fb4f937d86612de800a3c46b6"} Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.372775 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.449741 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-ssh-key\") pod \"12cea285-00cd-40e4-b751-75563f414f33\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.449874 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-combined-ca-bundle\") pod \"12cea285-00cd-40e4-b751-75563f414f33\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.449939 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5t92\" (UniqueName: \"kubernetes.io/projected/12cea285-00cd-40e4-b751-75563f414f33-kube-api-access-v5t92\") pod \"12cea285-00cd-40e4-b751-75563f414f33\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.450084 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-inventory\") pod \"12cea285-00cd-40e4-b751-75563f414f33\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.450135 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-secret-0\") pod \"12cea285-00cd-40e4-b751-75563f414f33\" (UID: \"12cea285-00cd-40e4-b751-75563f414f33\") " Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.456531 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "12cea285-00cd-40e4-b751-75563f414f33" (UID: "12cea285-00cd-40e4-b751-75563f414f33"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.456574 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12cea285-00cd-40e4-b751-75563f414f33-kube-api-access-v5t92" (OuterVolumeSpecName: "kube-api-access-v5t92") pod "12cea285-00cd-40e4-b751-75563f414f33" (UID: "12cea285-00cd-40e4-b751-75563f414f33"). InnerVolumeSpecName "kube-api-access-v5t92". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.479132 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "12cea285-00cd-40e4-b751-75563f414f33" (UID: "12cea285-00cd-40e4-b751-75563f414f33"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.479518 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-inventory" (OuterVolumeSpecName: "inventory") pod "12cea285-00cd-40e4-b751-75563f414f33" (UID: "12cea285-00cd-40e4-b751-75563f414f33"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.484981 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "12cea285-00cd-40e4-b751-75563f414f33" (UID: "12cea285-00cd-40e4-b751-75563f414f33"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.553437 4767 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.553472 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.553484 4767 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.553493 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5t92\" (UniqueName: \"kubernetes.io/projected/12cea285-00cd-40e4-b751-75563f414f33-kube-api-access-v5t92\") on node \"crc\" DevicePath \"\"" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.553502 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12cea285-00cd-40e4-b751-75563f414f33-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.937501 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" event={"ID":"12cea285-00cd-40e4-b751-75563f414f33","Type":"ContainerDied","Data":"ca57ad9bfc223bf1d60111d12de9e00e7b9c8b06fbc6e8e5e9a5cf9b9a4023b5"} Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.937539 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca57ad9bfc223bf1d60111d12de9e00e7b9c8b06fbc6e8e5e9a5cf9b9a4023b5" Nov 24 22:17:42 crc kubenswrapper[4767]: I1124 22:17:42.937574 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.021412 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2"] Nov 24 22:17:43 crc kubenswrapper[4767]: E1124 22:17:43.021894 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12cea285-00cd-40e4-b751-75563f414f33" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.021919 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="12cea285-00cd-40e4-b751-75563f414f33" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 22:17:43 crc kubenswrapper[4767]: E1124 22:17:43.021938 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac31544-be85-4085-9666-7213b1638074" containerName="extract-content" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.021948 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac31544-be85-4085-9666-7213b1638074" containerName="extract-content" Nov 24 22:17:43 crc kubenswrapper[4767]: E1124 22:17:43.021967 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac31544-be85-4085-9666-7213b1638074" containerName="registry-server" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.021976 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac31544-be85-4085-9666-7213b1638074" containerName="registry-server" Nov 24 22:17:43 crc kubenswrapper[4767]: E1124 22:17:43.022005 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac31544-be85-4085-9666-7213b1638074" containerName="extract-utilities" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.022013 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac31544-be85-4085-9666-7213b1638074" containerName="extract-utilities" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.022302 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="12cea285-00cd-40e4-b751-75563f414f33" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.022335 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac31544-be85-4085-9666-7213b1638074" containerName="registry-server" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.023164 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.025378 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.026803 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.027039 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.027196 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.027367 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.029860 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.043090 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.084138 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2"] Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.169651 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.169714 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.169767 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.169809 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lmt2\" (UniqueName: \"kubernetes.io/projected/4939e57b-c314-4065-a96f-e111bd32f3e2-kube-api-access-8lmt2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.169857 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.169896 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.169930 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.170014 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.170050 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.272096 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.272171 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.272253 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.272321 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.272460 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.273180 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.273346 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lmt2\" (UniqueName: \"kubernetes.io/projected/4939e57b-c314-4065-a96f-e111bd32f3e2-kube-api-access-8lmt2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.273867 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.274377 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.274418 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.276383 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.276415 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.276388 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.276842 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.278246 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.278805 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.279707 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.291333 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lmt2\" (UniqueName: \"kubernetes.io/projected/4939e57b-c314-4065-a96f-e111bd32f3e2-kube-api-access-8lmt2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hw2k2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.344991 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.908558 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2"] Nov 24 22:17:43 crc kubenswrapper[4767]: I1124 22:17:43.947994 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" event={"ID":"4939e57b-c314-4065-a96f-e111bd32f3e2","Type":"ContainerStarted","Data":"1d1266ab3c957a0df9c2e56363266d17993f8f163991a1c8a40d3e1b676aef4f"} Nov 24 22:17:44 crc kubenswrapper[4767]: I1124 22:17:44.960984 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" event={"ID":"4939e57b-c314-4065-a96f-e111bd32f3e2","Type":"ContainerStarted","Data":"8e21570f3f3c9ccd4cb16873f4e4c1e7455c4dd32026ef63442df2bbc00cf8a7"} Nov 24 22:17:44 crc kubenswrapper[4767]: I1124 22:17:44.994745 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" podStartSLOduration=1.318591602 podStartE2EDuration="1.994720838s" podCreationTimestamp="2025-11-24 22:17:43 +0000 UTC" firstStartedPulling="2025-11-24 22:17:43.899969745 +0000 UTC m=+2346.816953137" lastFinishedPulling="2025-11-24 22:17:44.576099001 +0000 UTC m=+2347.493082373" observedRunningTime="2025-11-24 22:17:44.983524061 +0000 UTC m=+2347.900507473" watchObservedRunningTime="2025-11-24 22:17:44.994720838 +0000 UTC m=+2347.911704200" Nov 24 22:17:45 crc kubenswrapper[4767]: I1124 22:17:45.314261 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:17:45 crc kubenswrapper[4767]: E1124 22:17:45.315316 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:17:56 crc kubenswrapper[4767]: I1124 22:17:56.313649 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:17:56 crc kubenswrapper[4767]: E1124 22:17:56.314407 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:18:07 crc kubenswrapper[4767]: I1124 22:18:07.313530 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:18:07 crc kubenswrapper[4767]: E1124 22:18:07.314669 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" 
podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:18:22 crc kubenswrapper[4767]: I1124 22:18:22.315026 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:18:22 crc kubenswrapper[4767]: E1124 22:18:22.316324 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:18:36 crc kubenswrapper[4767]: I1124 22:18:36.314156 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:18:36 crc kubenswrapper[4767]: E1124 22:18:36.315161 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:18:47 crc kubenswrapper[4767]: I1124 22:18:47.314222 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:18:47 crc kubenswrapper[4767]: E1124 22:18:47.315015 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:19:01 crc kubenswrapper[4767]: I1124 22:19:01.313081 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:19:01 crc kubenswrapper[4767]: E1124 22:19:01.313814 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:19:16 crc kubenswrapper[4767]: I1124 22:19:16.315482 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:19:16 crc kubenswrapper[4767]: E1124 22:19:16.316486 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:19:27 crc kubenswrapper[4767]: I1124 22:19:27.313483 4767 scope.go:117] "RemoveContainer" 
containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:19:27 crc kubenswrapper[4767]: E1124 22:19:27.314263 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:19:38 crc kubenswrapper[4767]: I1124 22:19:38.320089 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:19:38 crc kubenswrapper[4767]: E1124 22:19:38.321071 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:19:49 crc kubenswrapper[4767]: I1124 22:19:49.313885 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:19:49 crc kubenswrapper[4767]: E1124 22:19:49.315045 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:20:03 crc kubenswrapper[4767]: I1124 22:20:03.314019 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:20:03 crc kubenswrapper[4767]: E1124 22:20:03.314770 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:20:18 crc kubenswrapper[4767]: I1124 22:20:18.320424 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:20:18 crc kubenswrapper[4767]: I1124 22:20:18.689336 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"dad6703894db8aa762877a6ddc6ca95faefcc00fa6123271efb16deedb6c3b70"} Nov 24 22:20:56 crc kubenswrapper[4767]: I1124 22:20:56.066150 4767 generic.go:334] "Generic (PLEG): container finished" podID="4939e57b-c314-4065-a96f-e111bd32f3e2" containerID="8e21570f3f3c9ccd4cb16873f4e4c1e7455c4dd32026ef63442df2bbc00cf8a7" exitCode=0 Nov 24 22:20:56 crc kubenswrapper[4767]: I1124 22:20:56.066237 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" 
event={"ID":"4939e57b-c314-4065-a96f-e111bd32f3e2","Type":"ContainerDied","Data":"8e21570f3f3c9ccd4cb16873f4e4c1e7455c4dd32026ef63442df2bbc00cf8a7"} Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.515160 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.640399 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-combined-ca-bundle\") pod \"4939e57b-c314-4065-a96f-e111bd32f3e2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.640458 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-1\") pod \"4939e57b-c314-4065-a96f-e111bd32f3e2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.640515 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-extra-config-0\") pod \"4939e57b-c314-4065-a96f-e111bd32f3e2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.640539 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-ssh-key\") pod \"4939e57b-c314-4065-a96f-e111bd32f3e2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.640610 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lmt2\" (UniqueName: \"kubernetes.io/projected/4939e57b-c314-4065-a96f-e111bd32f3e2-kube-api-access-8lmt2\") pod \"4939e57b-c314-4065-a96f-e111bd32f3e2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.640675 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-0\") pod \"4939e57b-c314-4065-a96f-e111bd32f3e2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.640736 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-inventory\") pod \"4939e57b-c314-4065-a96f-e111bd32f3e2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.640775 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-0\") pod \"4939e57b-c314-4065-a96f-e111bd32f3e2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.640800 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-1\") pod 
\"4939e57b-c314-4065-a96f-e111bd32f3e2\" (UID: \"4939e57b-c314-4065-a96f-e111bd32f3e2\") " Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.648049 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4939e57b-c314-4065-a96f-e111bd32f3e2" (UID: "4939e57b-c314-4065-a96f-e111bd32f3e2"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.650535 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4939e57b-c314-4065-a96f-e111bd32f3e2-kube-api-access-8lmt2" (OuterVolumeSpecName: "kube-api-access-8lmt2") pod "4939e57b-c314-4065-a96f-e111bd32f3e2" (UID: "4939e57b-c314-4065-a96f-e111bd32f3e2"). InnerVolumeSpecName "kube-api-access-8lmt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.674137 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "4939e57b-c314-4065-a96f-e111bd32f3e2" (UID: "4939e57b-c314-4065-a96f-e111bd32f3e2"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.678112 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "4939e57b-c314-4065-a96f-e111bd32f3e2" (UID: "4939e57b-c314-4065-a96f-e111bd32f3e2"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.678251 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "4939e57b-c314-4065-a96f-e111bd32f3e2" (UID: "4939e57b-c314-4065-a96f-e111bd32f3e2"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.682038 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-inventory" (OuterVolumeSpecName: "inventory") pod "4939e57b-c314-4065-a96f-e111bd32f3e2" (UID: "4939e57b-c314-4065-a96f-e111bd32f3e2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.682806 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "4939e57b-c314-4065-a96f-e111bd32f3e2" (UID: "4939e57b-c314-4065-a96f-e111bd32f3e2"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.693125 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4939e57b-c314-4065-a96f-e111bd32f3e2" (UID: "4939e57b-c314-4065-a96f-e111bd32f3e2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.695778 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "4939e57b-c314-4065-a96f-e111bd32f3e2" (UID: "4939e57b-c314-4065-a96f-e111bd32f3e2"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.743765 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.743818 4767 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.743838 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lmt2\" (UniqueName: \"kubernetes.io/projected/4939e57b-c314-4065-a96f-e111bd32f3e2-kube-api-access-8lmt2\") on node \"crc\" DevicePath \"\"" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.743856 4767 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.743872 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.743892 4767 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.743912 4767 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.743929 4767 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:20:57 crc kubenswrapper[4767]: I1124 22:20:57.743948 4767 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4939e57b-c314-4065-a96f-e111bd32f3e2-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.089359 4767 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" event={"ID":"4939e57b-c314-4065-a96f-e111bd32f3e2","Type":"ContainerDied","Data":"1d1266ab3c957a0df9c2e56363266d17993f8f163991a1c8a40d3e1b676aef4f"} Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.089798 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d1266ab3c957a0df9c2e56363266d17993f8f163991a1c8a40d3e1b676aef4f" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.089500 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hw2k2" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.195986 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt"] Nov 24 22:20:58 crc kubenswrapper[4767]: E1124 22:20:58.196458 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4939e57b-c314-4065-a96f-e111bd32f3e2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.196474 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4939e57b-c314-4065-a96f-e111bd32f3e2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.196687 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="4939e57b-c314-4065-a96f-e111bd32f3e2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.197462 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.199993 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.200128 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.200251 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.200776 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2vhxm" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.201588 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.207215 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt"] Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.355894 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.356029 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.356077 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.356145 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.356291 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsnmf\" (UniqueName: \"kubernetes.io/projected/4712a89f-30ee-4a70-99f4-8765c454f318-kube-api-access-dsnmf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.356341 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.356479 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.457926 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.458008 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.458096 4767 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.458149 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.458178 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.458228 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.458251 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsnmf\" (UniqueName: \"kubernetes.io/projected/4712a89f-30ee-4a70-99f4-8765c454f318-kube-api-access-dsnmf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.462818 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.463392 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.463673 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.463787 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.463897 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.464051 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.479135 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsnmf\" (UniqueName: \"kubernetes.io/projected/4712a89f-30ee-4a70-99f4-8765c454f318-kube-api-access-dsnmf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:58 crc kubenswrapper[4767]: I1124 22:20:58.516725 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:20:59 crc kubenswrapper[4767]: I1124 22:20:59.057057 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt"] Nov 24 22:20:59 crc kubenswrapper[4767]: I1124 22:20:59.101776 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" event={"ID":"4712a89f-30ee-4a70-99f4-8765c454f318","Type":"ContainerStarted","Data":"3da16dc124ea7511cc4ae7f2d327994dca7d5f7bda2ed11208250bbb86df6447"} Nov 24 22:21:00 crc kubenswrapper[4767]: I1124 22:21:00.114423 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" event={"ID":"4712a89f-30ee-4a70-99f4-8765c454f318","Type":"ContainerStarted","Data":"4bfe89a2d0cb6c19182936a8efdd7492fbdf2224ac737c721c47dad1caad9125"} Nov 24 22:21:00 crc kubenswrapper[4767]: I1124 22:21:00.141691 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" podStartSLOduration=1.702235619 podStartE2EDuration="2.141673896s" podCreationTimestamp="2025-11-24 22:20:58 +0000 UTC" firstStartedPulling="2025-11-24 22:20:59.067493177 +0000 UTC m=+2541.984476559" lastFinishedPulling="2025-11-24 22:20:59.506931464 +0000 UTC m=+2542.423914836" observedRunningTime="2025-11-24 22:21:00.136627793 +0000 UTC m=+2543.053611225" watchObservedRunningTime="2025-11-24 22:21:00.141673896 +0000 UTC m=+2543.058657268" Nov 24 22:22:35 crc kubenswrapper[4767]: I1124 22:22:35.481849 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:22:35 crc kubenswrapper[4767]: I1124 22:22:35.482529 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:23:05 crc kubenswrapper[4767]: I1124 22:23:05.481306 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:23:05 crc kubenswrapper[4767]: I1124 22:23:05.482111 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:23:24 crc kubenswrapper[4767]: I1124 22:23:24.662453 4767 generic.go:334] "Generic (PLEG): container finished" podID="4712a89f-30ee-4a70-99f4-8765c454f318" containerID="4bfe89a2d0cb6c19182936a8efdd7492fbdf2224ac737c721c47dad1caad9125" exitCode=0 Nov 24 22:23:24 crc kubenswrapper[4767]: I1124 22:23:24.662566 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" 
event={"ID":"4712a89f-30ee-4a70-99f4-8765c454f318","Type":"ContainerDied","Data":"4bfe89a2d0cb6c19182936a8efdd7492fbdf2224ac737c721c47dad1caad9125"} Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.079179 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.179517 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsnmf\" (UniqueName: \"kubernetes.io/projected/4712a89f-30ee-4a70-99f4-8765c454f318-kube-api-access-dsnmf\") pod \"4712a89f-30ee-4a70-99f4-8765c454f318\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.179677 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-2\") pod \"4712a89f-30ee-4a70-99f4-8765c454f318\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.179769 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-telemetry-combined-ca-bundle\") pod \"4712a89f-30ee-4a70-99f4-8765c454f318\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.179885 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ssh-key\") pod \"4712a89f-30ee-4a70-99f4-8765c454f318\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.179939 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-1\") pod \"4712a89f-30ee-4a70-99f4-8765c454f318\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.179976 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-0\") pod \"4712a89f-30ee-4a70-99f4-8765c454f318\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.180108 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-inventory\") pod \"4712a89f-30ee-4a70-99f4-8765c454f318\" (UID: \"4712a89f-30ee-4a70-99f4-8765c454f318\") " Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.187718 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4712a89f-30ee-4a70-99f4-8765c454f318-kube-api-access-dsnmf" (OuterVolumeSpecName: "kube-api-access-dsnmf") pod "4712a89f-30ee-4a70-99f4-8765c454f318" (UID: "4712a89f-30ee-4a70-99f4-8765c454f318"). InnerVolumeSpecName "kube-api-access-dsnmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.187910 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4712a89f-30ee-4a70-99f4-8765c454f318" (UID: "4712a89f-30ee-4a70-99f4-8765c454f318"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.212652 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "4712a89f-30ee-4a70-99f4-8765c454f318" (UID: "4712a89f-30ee-4a70-99f4-8765c454f318"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.219587 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "4712a89f-30ee-4a70-99f4-8765c454f318" (UID: "4712a89f-30ee-4a70-99f4-8765c454f318"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.237743 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-inventory" (OuterVolumeSpecName: "inventory") pod "4712a89f-30ee-4a70-99f4-8765c454f318" (UID: "4712a89f-30ee-4a70-99f4-8765c454f318"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.240593 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "4712a89f-30ee-4a70-99f4-8765c454f318" (UID: "4712a89f-30ee-4a70-99f4-8765c454f318"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.240987 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4712a89f-30ee-4a70-99f4-8765c454f318" (UID: "4712a89f-30ee-4a70-99f4-8765c454f318"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.284149 4767 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.284199 4767 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.284215 4767 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.284226 4767 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.284238 4767 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.284251 4767 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4712a89f-30ee-4a70-99f4-8765c454f318-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.284263 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsnmf\" (UniqueName: \"kubernetes.io/projected/4712a89f-30ee-4a70-99f4-8765c454f318-kube-api-access-dsnmf\") on node \"crc\" DevicePath \"\"" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.686567 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" event={"ID":"4712a89f-30ee-4a70-99f4-8765c454f318","Type":"ContainerDied","Data":"3da16dc124ea7511cc4ae7f2d327994dca7d5f7bda2ed11208250bbb86df6447"} Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.686994 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3da16dc124ea7511cc4ae7f2d327994dca7d5f7bda2ed11208250bbb86df6447" Nov 24 22:23:26 crc kubenswrapper[4767]: I1124 22:23:26.686630 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt" Nov 24 22:23:35 crc kubenswrapper[4767]: I1124 22:23:35.482150 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:23:35 crc kubenswrapper[4767]: I1124 22:23:35.483087 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:23:35 crc kubenswrapper[4767]: I1124 22:23:35.483183 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 22:23:35 crc kubenswrapper[4767]: I1124 22:23:35.484611 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dad6703894db8aa762877a6ddc6ca95faefcc00fa6123271efb16deedb6c3b70"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:23:35 crc kubenswrapper[4767]: I1124 22:23:35.484739 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://dad6703894db8aa762877a6ddc6ca95faefcc00fa6123271efb16deedb6c3b70" gracePeriod=600 Nov 24 22:23:35 crc kubenswrapper[4767]: I1124 22:23:35.800467 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="dad6703894db8aa762877a6ddc6ca95faefcc00fa6123271efb16deedb6c3b70" exitCode=0 Nov 24 22:23:35 crc kubenswrapper[4767]: I1124 22:23:35.800522 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"dad6703894db8aa762877a6ddc6ca95faefcc00fa6123271efb16deedb6c3b70"} Nov 24 22:23:35 crc kubenswrapper[4767]: I1124 22:23:35.800764 4767 scope.go:117] "RemoveContainer" containerID="6db704bb5fb005f1a2112feea36d5949360bb98cd67e89b20e7689ab94c9dd7d" Nov 24 22:23:36 crc kubenswrapper[4767]: I1124 22:23:36.815733 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"} Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.587767 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dk69f"] Nov 24 22:23:56 crc kubenswrapper[4767]: E1124 22:23:56.589096 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4712a89f-30ee-4a70-99f4-8765c454f318" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.589121 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4712a89f-30ee-4a70-99f4-8765c454f318" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.589578 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="4712a89f-30ee-4a70-99f4-8765c454f318" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.592439 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.615233 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk69f"] Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.763400 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-utilities\") pod \"certified-operators-dk69f\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.763760 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-catalog-content\") pod \"certified-operators-dk69f\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.763939 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxtvg\" (UniqueName: \"kubernetes.io/projected/255d133f-2de5-4b7d-a1dc-9091d0bd6580-kube-api-access-dxtvg\") pod \"certified-operators-dk69f\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.866470 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-catalog-content\") pod \"certified-operators-dk69f\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.866539 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxtvg\" (UniqueName: \"kubernetes.io/projected/255d133f-2de5-4b7d-a1dc-9091d0bd6580-kube-api-access-dxtvg\") pod \"certified-operators-dk69f\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.866683 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-utilities\") pod \"certified-operators-dk69f\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.867589 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-utilities\") pod \"certified-operators-dk69f\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.867746 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-catalog-content\") pod \"certified-operators-dk69f\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.897286 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxtvg\" (UniqueName: \"kubernetes.io/projected/255d133f-2de5-4b7d-a1dc-9091d0bd6580-kube-api-access-dxtvg\") pod \"certified-operators-dk69f\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:23:56 crc kubenswrapper[4767]: I1124 22:23:56.927032 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:23:57 crc kubenswrapper[4767]: I1124 22:23:57.412461 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk69f"] Nov 24 22:23:58 crc kubenswrapper[4767]: I1124 22:23:58.052421 4767 generic.go:334] "Generic (PLEG): container finished" podID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" containerID="b38dcafb1c6a4ecedc5186322d182b633f7328def3b604aa28e2765ff0347b3a" exitCode=0 Nov 24 22:23:58 crc kubenswrapper[4767]: I1124 22:23:58.052642 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk69f" event={"ID":"255d133f-2de5-4b7d-a1dc-9091d0bd6580","Type":"ContainerDied","Data":"b38dcafb1c6a4ecedc5186322d182b633f7328def3b604aa28e2765ff0347b3a"} Nov 24 22:23:58 crc kubenswrapper[4767]: I1124 22:23:58.053072 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk69f" event={"ID":"255d133f-2de5-4b7d-a1dc-9091d0bd6580","Type":"ContainerStarted","Data":"d290f116276b57c6e92c9b56d3158b8a180a8fde038df5f74cf89d6f441471f8"} Nov 24 22:23:58 crc kubenswrapper[4767]: I1124 22:23:58.055936 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:24:02 crc kubenswrapper[4767]: I1124 22:24:02.095807 4767 generic.go:334] "Generic (PLEG): container finished" podID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" containerID="30eaa2bbce6b7ad9ddbd59c668d705d0e5fcaf625c67c58ae3452c2cff561fe4" exitCode=0 Nov 24 22:24:02 crc kubenswrapper[4767]: I1124 22:24:02.095959 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk69f" event={"ID":"255d133f-2de5-4b7d-a1dc-9091d0bd6580","Type":"ContainerDied","Data":"30eaa2bbce6b7ad9ddbd59c668d705d0e5fcaf625c67c58ae3452c2cff561fe4"} Nov 24 22:24:03 crc kubenswrapper[4767]: I1124 22:24:03.113060 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk69f" event={"ID":"255d133f-2de5-4b7d-a1dc-9091d0bd6580","Type":"ContainerStarted","Data":"9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2"} Nov 24 22:24:03 crc kubenswrapper[4767]: I1124 22:24:03.138291 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dk69f" podStartSLOduration=2.608516756 podStartE2EDuration="7.138253562s" podCreationTimestamp="2025-11-24 22:23:56 +0000 UTC" firstStartedPulling="2025-11-24 22:23:58.055371987 +0000 UTC m=+2720.972355399" lastFinishedPulling="2025-11-24 22:24:02.585108823 +0000 UTC m=+2725.502092205" 
observedRunningTime="2025-11-24 22:24:03.13217358 +0000 UTC m=+2726.049156952" watchObservedRunningTime="2025-11-24 22:24:03.138253562 +0000 UTC m=+2726.055236934" Nov 24 22:24:04 crc kubenswrapper[4767]: I1124 22:24:04.269144 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 22:24:04 crc kubenswrapper[4767]: I1124 22:24:04.269896 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="prometheus" containerID="cri-o://b93452ecdbfd84b4d4056576486ff2145ebda4de665946cda363b626a451c53a" gracePeriod=600 Nov 24 22:24:04 crc kubenswrapper[4767]: I1124 22:24:04.270054 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="thanos-sidecar" containerID="cri-o://5350731ca94ea60bc9fa4513e771a8c6cf594106b7d5a5fc485d8d0244564dc6" gracePeriod=600 Nov 24 22:24:04 crc kubenswrapper[4767]: I1124 22:24:04.270104 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="config-reloader" containerID="cri-o://a219ea07fcd3fd0e5a8b0567916bd7ae58e89018793fff91b39baa82fff1e6b0" gracePeriod=600 Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.134769 4767 generic.go:334] "Generic (PLEG): container finished" podID="825cb17a-68e9-412d-829f-88001f53782c" containerID="5350731ca94ea60bc9fa4513e771a8c6cf594106b7d5a5fc485d8d0244564dc6" exitCode=0 Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.135144 4767 generic.go:334] "Generic (PLEG): container finished" podID="825cb17a-68e9-412d-829f-88001f53782c" containerID="a219ea07fcd3fd0e5a8b0567916bd7ae58e89018793fff91b39baa82fff1e6b0" exitCode=0 Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.135157 4767 generic.go:334] "Generic (PLEG): container finished" podID="825cb17a-68e9-412d-829f-88001f53782c" containerID="b93452ecdbfd84b4d4056576486ff2145ebda4de665946cda363b626a451c53a" exitCode=0 Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.134858 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"825cb17a-68e9-412d-829f-88001f53782c","Type":"ContainerDied","Data":"5350731ca94ea60bc9fa4513e771a8c6cf594106b7d5a5fc485d8d0244564dc6"} Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.135196 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"825cb17a-68e9-412d-829f-88001f53782c","Type":"ContainerDied","Data":"a219ea07fcd3fd0e5a8b0567916bd7ae58e89018793fff91b39baa82fff1e6b0"} Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.135213 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"825cb17a-68e9-412d-829f-88001f53782c","Type":"ContainerDied","Data":"b93452ecdbfd84b4d4056576486ff2145ebda4de665946cda363b626a451c53a"} Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.249288 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.367611 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"825cb17a-68e9-412d-829f-88001f53782c\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.369392 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-config\") pod \"825cb17a-68e9-412d-829f-88001f53782c\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.369465 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config\") pod \"825cb17a-68e9-412d-829f-88001f53782c\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.369538 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-thanos-prometheus-http-client-file\") pod \"825cb17a-68e9-412d-829f-88001f53782c\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.369583 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm8ts\" (UniqueName: \"kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-kube-api-access-dm8ts\") pod \"825cb17a-68e9-412d-829f-88001f53782c\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.369736 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/825cb17a-68e9-412d-829f-88001f53782c-config-out\") pod \"825cb17a-68e9-412d-829f-88001f53782c\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.369794 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"825cb17a-68e9-412d-829f-88001f53782c\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.369829 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/825cb17a-68e9-412d-829f-88001f53782c-prometheus-metric-storage-rulefiles-0\") pod \"825cb17a-68e9-412d-829f-88001f53782c\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.369908 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-tls-assets\") pod \"825cb17a-68e9-412d-829f-88001f53782c\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.369959 4767 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"825cb17a-68e9-412d-829f-88001f53782c\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.369996 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-secret-combined-ca-bundle\") pod \"825cb17a-68e9-412d-829f-88001f53782c\" (UID: \"825cb17a-68e9-412d-829f-88001f53782c\") " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.370725 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/825cb17a-68e9-412d-829f-88001f53782c-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "825cb17a-68e9-412d-829f-88001f53782c" (UID: "825cb17a-68e9-412d-829f-88001f53782c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.372713 4767 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/825cb17a-68e9-412d-829f-88001f53782c-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.377782 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-kube-api-access-dm8ts" (OuterVolumeSpecName: "kube-api-access-dm8ts") pod "825cb17a-68e9-412d-829f-88001f53782c" (UID: "825cb17a-68e9-412d-829f-88001f53782c"). InnerVolumeSpecName "kube-api-access-dm8ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.377805 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "825cb17a-68e9-412d-829f-88001f53782c" (UID: "825cb17a-68e9-412d-829f-88001f53782c"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.378086 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-config" (OuterVolumeSpecName: "config") pod "825cb17a-68e9-412d-829f-88001f53782c" (UID: "825cb17a-68e9-412d-829f-88001f53782c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.378083 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/825cb17a-68e9-412d-829f-88001f53782c-config-out" (OuterVolumeSpecName: "config-out") pod "825cb17a-68e9-412d-829f-88001f53782c" (UID: "825cb17a-68e9-412d-829f-88001f53782c"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.379442 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "825cb17a-68e9-412d-829f-88001f53782c" (UID: "825cb17a-68e9-412d-829f-88001f53782c"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.380216 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "825cb17a-68e9-412d-829f-88001f53782c" (UID: "825cb17a-68e9-412d-829f-88001f53782c"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.381387 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "825cb17a-68e9-412d-829f-88001f53782c" (UID: "825cb17a-68e9-412d-829f-88001f53782c"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.383337 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "825cb17a-68e9-412d-829f-88001f53782c" (UID: "825cb17a-68e9-412d-829f-88001f53782c"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.400965 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "825cb17a-68e9-412d-829f-88001f53782c" (UID: "825cb17a-68e9-412d-829f-88001f53782c"). InnerVolumeSpecName "pvc-71f905aa-f502-4da2-b361-dd72fb27e489". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.464138 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config" (OuterVolumeSpecName: "web-config") pod "825cb17a-68e9-412d-829f-88001f53782c" (UID: "825cb17a-68e9-412d-829f-88001f53782c"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.474290 4767 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.474330 4767 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.474345 4767 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.474373 4767 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") on node \"crc\" " Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.474386 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-config\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.474395 4767 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.474404 4767 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.474413 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm8ts\" (UniqueName: \"kubernetes.io/projected/825cb17a-68e9-412d-829f-88001f53782c-kube-api-access-dm8ts\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.474422 4767 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/825cb17a-68e9-412d-829f-88001f53782c-config-out\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.474431 4767 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/825cb17a-68e9-412d-829f-88001f53782c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.504602 4767 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.505382 4767 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-71f905aa-f502-4da2-b361-dd72fb27e489" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489") on node "crc" Nov 24 22:24:05 crc kubenswrapper[4767]: I1124 22:24:05.576680 4767 reconciler_common.go:293] "Volume detached for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.148068 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"825cb17a-68e9-412d-829f-88001f53782c","Type":"ContainerDied","Data":"7b3fd4452055af52d38f4fc7a0317dc004c3938640399d24e65b842f104b2336"} Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.148136 4767 scope.go:117] "RemoveContainer" containerID="5350731ca94ea60bc9fa4513e771a8c6cf594106b7d5a5fc485d8d0244564dc6" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.148140 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.174702 4767 scope.go:117] "RemoveContainer" containerID="a219ea07fcd3fd0e5a8b0567916bd7ae58e89018793fff91b39baa82fff1e6b0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.183044 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.193050 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.199958 4767 scope.go:117] "RemoveContainer" containerID="b93452ecdbfd84b4d4056576486ff2145ebda4de665946cda363b626a451c53a" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.217405 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 22:24:06 crc kubenswrapper[4767]: E1124 22:24:06.217920 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="config-reloader" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.217941 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="config-reloader" Nov 24 22:24:06 crc kubenswrapper[4767]: E1124 22:24:06.217965 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="init-config-reloader" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.217974 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="init-config-reloader" Nov 24 22:24:06 crc kubenswrapper[4767]: E1124 22:24:06.217995 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="prometheus" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.218002 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="prometheus" Nov 24 22:24:06 crc kubenswrapper[4767]: E1124 22:24:06.218026 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="thanos-sidecar" Nov 24 22:24:06 crc 
kubenswrapper[4767]: I1124 22:24:06.218033 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="thanos-sidecar" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.218194 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="thanos-sidecar" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.218220 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="config-reloader" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.218232 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="prometheus" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.219985 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.223251 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-mmxtp" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.223258 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.227232 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.227616 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.231538 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.227639 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.237996 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.240412 4767 scope.go:117] "RemoveContainer" containerID="387ec5927bfb3e773b99fea7bd24a3cffb7e069f3f48032c3a149150d7a6bdc1" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.325291 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="825cb17a-68e9-412d-829f-88001f53782c" path="/var/lib/kubelet/pods/825cb17a-68e9-412d-829f-88001f53782c/volumes" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.394149 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.394247 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnkxl\" (UniqueName: \"kubernetes.io/projected/76349a53-1d05-411f-9af2-0833bc0667b1-kube-api-access-bnkxl\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " 
pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.394329 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-config\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.394367 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/76349a53-1d05-411f-9af2-0833bc0667b1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.394498 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.394569 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/76349a53-1d05-411f-9af2-0833bc0667b1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.394592 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.394627 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.394818 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.394901 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.394961 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/76349a53-1d05-411f-9af2-0833bc0667b1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.496327 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/76349a53-1d05-411f-9af2-0833bc0667b1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.496373 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.496415 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.496478 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.496512 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.496547 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/76349a53-1d05-411f-9af2-0833bc0667b1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.496607 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.496655 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnkxl\" (UniqueName: 
\"kubernetes.io/projected/76349a53-1d05-411f-9af2-0833bc0667b1-kube-api-access-bnkxl\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.496696 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-config\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.496723 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/76349a53-1d05-411f-9af2-0833bc0667b1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.496782 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.498021 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/76349a53-1d05-411f-9af2-0833bc0667b1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.500074 4767 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.500317 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5b4c963982fee8444440b339c0b04b674e3a0c1d34dde87d25887f0d341e5df1/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.501223 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/76349a53-1d05-411f-9af2-0833bc0667b1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.501800 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.502242 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.504180 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.504306 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-config\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.504869 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.505297 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76349a53-1d05-411f-9af2-0833bc0667b1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.507262 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/76349a53-1d05-411f-9af2-0833bc0667b1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.518285 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnkxl\" (UniqueName: \"kubernetes.io/projected/76349a53-1d05-411f-9af2-0833bc0667b1-kube-api-access-bnkxl\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.551625 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-71f905aa-f502-4da2-b361-dd72fb27e489\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71f905aa-f502-4da2-b361-dd72fb27e489\") pod \"prometheus-metric-storage-0\" (UID: \"76349a53-1d05-411f-9af2-0833bc0667b1\") " pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.837231 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.927650 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.927738 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:24:06 crc kubenswrapper[4767]: I1124 22:24:06.983755 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:24:07 crc kubenswrapper[4767]: I1124 22:24:07.205109 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dk69f" Nov 24 22:24:07 crc kubenswrapper[4767]: I1124 22:24:07.269052 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk69f"] Nov 24 22:24:07 crc kubenswrapper[4767]: I1124 22:24:07.314839 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z47p4"] Nov 24 22:24:07 crc kubenswrapper[4767]: I1124 22:24:07.315176 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z47p4" podUID="5e75c583-394f-42dd-84df-0dd865218112" containerName="registry-server" containerID="cri-o://920689bb359b06a5904af46e9450ada1b12c954b5605ab1172cb5432c7b72117" gracePeriod=2 Nov 24 22:24:07 crc kubenswrapper[4767]: I1124 22:24:07.355851 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 22:24:08 crc kubenswrapper[4767]: I1124 22:24:08.173252 4767 generic.go:334] "Generic (PLEG): container finished" podID="5e75c583-394f-42dd-84df-0dd865218112" containerID="920689bb359b06a5904af46e9450ada1b12c954b5605ab1172cb5432c7b72117" exitCode=0 Nov 24 22:24:08 crc kubenswrapper[4767]: I1124 22:24:08.173300 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47p4" event={"ID":"5e75c583-394f-42dd-84df-0dd865218112","Type":"ContainerDied","Data":"920689bb359b06a5904af46e9450ada1b12c954b5605ab1172cb5432c7b72117"} Nov 24 22:24:08 crc 
kubenswrapper[4767]: I1124 22:24:08.176886 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76349a53-1d05-411f-9af2-0833bc0667b1","Type":"ContainerStarted","Data":"6c463ae75197f5969a26a1f530ff1ea5ce9ed8e16a13357ef4e8421463a22efa"} Nov 24 22:24:08 crc kubenswrapper[4767]: I1124 22:24:08.244415 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="825cb17a-68e9-412d-829f-88001f53782c" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.142:9090/-/ready\": dial tcp 10.217.0.142:9090: i/o timeout" Nov 24 22:24:08 crc kubenswrapper[4767]: I1124 22:24:08.882472 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z47p4" Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.049825 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-utilities\") pod \"5e75c583-394f-42dd-84df-0dd865218112\" (UID: \"5e75c583-394f-42dd-84df-0dd865218112\") " Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.049928 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scm8k\" (UniqueName: \"kubernetes.io/projected/5e75c583-394f-42dd-84df-0dd865218112-kube-api-access-scm8k\") pod \"5e75c583-394f-42dd-84df-0dd865218112\" (UID: \"5e75c583-394f-42dd-84df-0dd865218112\") " Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.049969 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-catalog-content\") pod \"5e75c583-394f-42dd-84df-0dd865218112\" (UID: \"5e75c583-394f-42dd-84df-0dd865218112\") " Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.051904 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-utilities" (OuterVolumeSpecName: "utilities") pod "5e75c583-394f-42dd-84df-0dd865218112" (UID: "5e75c583-394f-42dd-84df-0dd865218112"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.058452 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e75c583-394f-42dd-84df-0dd865218112-kube-api-access-scm8k" (OuterVolumeSpecName: "kube-api-access-scm8k") pod "5e75c583-394f-42dd-84df-0dd865218112" (UID: "5e75c583-394f-42dd-84df-0dd865218112"). InnerVolumeSpecName "kube-api-access-scm8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.134617 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e75c583-394f-42dd-84df-0dd865218112" (UID: "5e75c583-394f-42dd-84df-0dd865218112"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.153027 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.153069 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scm8k\" (UniqueName: \"kubernetes.io/projected/5e75c583-394f-42dd-84df-0dd865218112-kube-api-access-scm8k\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.153084 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e75c583-394f-42dd-84df-0dd865218112-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.186119 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47p4" event={"ID":"5e75c583-394f-42dd-84df-0dd865218112","Type":"ContainerDied","Data":"de4a2e46ff58562f16236683fed5e7037a47a021288acd1a39390cb5a6082667"} Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.186195 4767 scope.go:117] "RemoveContainer" containerID="920689bb359b06a5904af46e9450ada1b12c954b5605ab1172cb5432c7b72117" Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.186191 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z47p4" Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.215420 4767 scope.go:117] "RemoveContainer" containerID="617347b765e96db12400cb54518775605ef85755c9267eef6837c3893e380a5c" Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.234032 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z47p4"] Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.244216 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z47p4"] Nov 24 22:24:09 crc kubenswrapper[4767]: I1124 22:24:09.245577 4767 scope.go:117] "RemoveContainer" containerID="4cbcfed91939f860474880c01edfed717207d36e3b6c48d04628d38434a2ff12" Nov 24 22:24:10 crc kubenswrapper[4767]: I1124 22:24:10.340325 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e75c583-394f-42dd-84df-0dd865218112" path="/var/lib/kubelet/pods/5e75c583-394f-42dd-84df-0dd865218112/volumes" Nov 24 22:24:11 crc kubenswrapper[4767]: I1124 22:24:11.230927 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76349a53-1d05-411f-9af2-0833bc0667b1","Type":"ContainerStarted","Data":"a2b821a3c5dbf7c0a2eebcdc3d6a257e37bda165812b8dbb14903c8dd454476f"} Nov 24 22:24:20 crc kubenswrapper[4767]: I1124 22:24:20.346919 4767 generic.go:334] "Generic (PLEG): container finished" podID="76349a53-1d05-411f-9af2-0833bc0667b1" containerID="a2b821a3c5dbf7c0a2eebcdc3d6a257e37bda165812b8dbb14903c8dd454476f" exitCode=0 Nov 24 22:24:20 crc kubenswrapper[4767]: I1124 22:24:20.347026 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76349a53-1d05-411f-9af2-0833bc0667b1","Type":"ContainerDied","Data":"a2b821a3c5dbf7c0a2eebcdc3d6a257e37bda165812b8dbb14903c8dd454476f"} Nov 24 22:24:21 crc kubenswrapper[4767]: I1124 22:24:21.360402 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"76349a53-1d05-411f-9af2-0833bc0667b1","Type":"ContainerStarted","Data":"36d40a0a75deb41b5a43afbe39ef672f0829f66296193f3b89a1713a23bace17"} Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.216015 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nvtfg"] Nov 24 22:24:22 crc kubenswrapper[4767]: E1124 22:24:22.216735 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e75c583-394f-42dd-84df-0dd865218112" containerName="registry-server" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.216760 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e75c583-394f-42dd-84df-0dd865218112" containerName="registry-server" Nov 24 22:24:22 crc kubenswrapper[4767]: E1124 22:24:22.216794 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e75c583-394f-42dd-84df-0dd865218112" containerName="extract-utilities" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.216803 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e75c583-394f-42dd-84df-0dd865218112" containerName="extract-utilities" Nov 24 22:24:22 crc kubenswrapper[4767]: E1124 22:24:22.216814 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e75c583-394f-42dd-84df-0dd865218112" containerName="extract-content" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.216822 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e75c583-394f-42dd-84df-0dd865218112" containerName="extract-content" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.217064 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e75c583-394f-42dd-84df-0dd865218112" containerName="registry-server" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.218775 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.255957 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvtfg"] Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.388774 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-utilities\") pod \"redhat-operators-nvtfg\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.388825 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f969q\" (UniqueName: \"kubernetes.io/projected/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-kube-api-access-f969q\") pod \"redhat-operators-nvtfg\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.389146 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-catalog-content\") pod \"redhat-operators-nvtfg\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.491601 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-catalog-content\") pod \"redhat-operators-nvtfg\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.491730 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-utilities\") pod \"redhat-operators-nvtfg\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.491758 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f969q\" (UniqueName: \"kubernetes.io/projected/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-kube-api-access-f969q\") pod \"redhat-operators-nvtfg\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.492777 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-catalog-content\") pod \"redhat-operators-nvtfg\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.492910 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-utilities\") pod \"redhat-operators-nvtfg\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.522701 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f969q\" (UniqueName: \"kubernetes.io/projected/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-kube-api-access-f969q\") pod \"redhat-operators-nvtfg\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.549972 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:22 crc kubenswrapper[4767]: I1124 22:24:22.995953 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvtfg"] Nov 24 22:24:23 crc kubenswrapper[4767]: I1124 22:24:23.380425 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvtfg" event={"ID":"ad4658a5-0307-4ef6-a3c1-f30dca3b6372","Type":"ContainerStarted","Data":"1791af8d529865ed25451d10d587032b9474511d4ad1f62b8274cee968c7be5b"} Nov 24 22:24:24 crc kubenswrapper[4767]: I1124 22:24:24.395671 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76349a53-1d05-411f-9af2-0833bc0667b1","Type":"ContainerStarted","Data":"95e64d24bca0e2e99616ffb07446784fb76970a0093b26db8444ef805c7a54eb"} Nov 24 22:24:24 crc kubenswrapper[4767]: I1124 22:24:24.401660 4767 generic.go:334] "Generic (PLEG): container finished" podID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerID="9f430957c97517cfbe062c18a93cf445369740789903e6d8ac65f92d1d6a996b" exitCode=0 Nov 24 22:24:24 crc kubenswrapper[4767]: I1124 22:24:24.401775 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvtfg" event={"ID":"ad4658a5-0307-4ef6-a3c1-f30dca3b6372","Type":"ContainerDied","Data":"9f430957c97517cfbe062c18a93cf445369740789903e6d8ac65f92d1d6a996b"} Nov 24 22:24:25 crc kubenswrapper[4767]: I1124 22:24:25.412261 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvtfg" event={"ID":"ad4658a5-0307-4ef6-a3c1-f30dca3b6372","Type":"ContainerStarted","Data":"ed0552b73e3221a9f89255241dbf077ace332eda63c668d17fd73be40b7e635c"} Nov 24 22:24:25 crc kubenswrapper[4767]: I1124 22:24:25.416058 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76349a53-1d05-411f-9af2-0833bc0667b1","Type":"ContainerStarted","Data":"8fc95462fdd3a93672ad6c02c49a2f31e6cc564a638636b1727b21d9fb8374c3"} Nov 24 22:24:25 crc kubenswrapper[4767]: I1124 22:24:25.494843 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.494824629 podStartE2EDuration="19.494824629s" podCreationTimestamp="2025-11-24 22:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:24:25.49344288 +0000 UTC m=+2748.410426292" watchObservedRunningTime="2025-11-24 22:24:25.494824629 +0000 UTC m=+2748.411808001" Nov 24 22:24:26 crc kubenswrapper[4767]: I1124 22:24:26.430645 4767 generic.go:334] "Generic (PLEG): container finished" podID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerID="ed0552b73e3221a9f89255241dbf077ace332eda63c668d17fd73be40b7e635c" exitCode=0 Nov 24 22:24:26 crc kubenswrapper[4767]: I1124 22:24:26.430716 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvtfg" 
event={"ID":"ad4658a5-0307-4ef6-a3c1-f30dca3b6372","Type":"ContainerDied","Data":"ed0552b73e3221a9f89255241dbf077ace332eda63c668d17fd73be40b7e635c"} Nov 24 22:24:26 crc kubenswrapper[4767]: I1124 22:24:26.838051 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:28 crc kubenswrapper[4767]: I1124 22:24:28.469295 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvtfg" event={"ID":"ad4658a5-0307-4ef6-a3c1-f30dca3b6372","Type":"ContainerStarted","Data":"3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95"} Nov 24 22:24:28 crc kubenswrapper[4767]: I1124 22:24:28.492810 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nvtfg" podStartSLOduration=3.677189011 podStartE2EDuration="6.49279401s" podCreationTimestamp="2025-11-24 22:24:22 +0000 UTC" firstStartedPulling="2025-11-24 22:24:24.403643858 +0000 UTC m=+2747.320627230" lastFinishedPulling="2025-11-24 22:24:27.219248857 +0000 UTC m=+2750.136232229" observedRunningTime="2025-11-24 22:24:28.489114036 +0000 UTC m=+2751.406097408" watchObservedRunningTime="2025-11-24 22:24:28.49279401 +0000 UTC m=+2751.409777382" Nov 24 22:24:32 crc kubenswrapper[4767]: I1124 22:24:32.550842 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:32 crc kubenswrapper[4767]: I1124 22:24:32.551568 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:33 crc kubenswrapper[4767]: I1124 22:24:33.643218 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvtfg" podUID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerName="registry-server" probeResult="failure" output=< Nov 24 22:24:33 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 22:24:33 crc kubenswrapper[4767]: > Nov 24 22:24:36 crc kubenswrapper[4767]: I1124 22:24:36.838349 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:36 crc kubenswrapper[4767]: I1124 22:24:36.854827 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:37 crc kubenswrapper[4767]: I1124 22:24:37.576314 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 22:24:42 crc kubenswrapper[4767]: I1124 22:24:42.654793 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:42 crc kubenswrapper[4767]: I1124 22:24:42.728356 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:42 crc kubenswrapper[4767]: I1124 22:24:42.903410 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvtfg"] Nov 24 22:24:44 crc kubenswrapper[4767]: I1124 22:24:44.663300 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nvtfg" podUID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerName="registry-server" containerID="cri-o://3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95" gracePeriod=2 Nov 24 22:24:45 crc 
kubenswrapper[4767]: I1124 22:24:45.157414 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.205438 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-utilities\") pod \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.205519 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-catalog-content\") pod \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.205704 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f969q\" (UniqueName: \"kubernetes.io/projected/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-kube-api-access-f969q\") pod \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\" (UID: \"ad4658a5-0307-4ef6-a3c1-f30dca3b6372\") " Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.206974 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-utilities" (OuterVolumeSpecName: "utilities") pod "ad4658a5-0307-4ef6-a3c1-f30dca3b6372" (UID: "ad4658a5-0307-4ef6-a3c1-f30dca3b6372"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.211428 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-kube-api-access-f969q" (OuterVolumeSpecName: "kube-api-access-f969q") pod "ad4658a5-0307-4ef6-a3c1-f30dca3b6372" (UID: "ad4658a5-0307-4ef6-a3c1-f30dca3b6372"). InnerVolumeSpecName "kube-api-access-f969q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.308649 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f969q\" (UniqueName: \"kubernetes.io/projected/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-kube-api-access-f969q\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.308901 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.308830 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad4658a5-0307-4ef6-a3c1-f30dca3b6372" (UID: "ad4658a5-0307-4ef6-a3c1-f30dca3b6372"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.410010 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4658a5-0307-4ef6-a3c1-f30dca3b6372-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.685216 4767 generic.go:334] "Generic (PLEG): container finished" podID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerID="3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95" exitCode=0 Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.685287 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvtfg" event={"ID":"ad4658a5-0307-4ef6-a3c1-f30dca3b6372","Type":"ContainerDied","Data":"3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95"} Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.685302 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvtfg" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.685329 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvtfg" event={"ID":"ad4658a5-0307-4ef6-a3c1-f30dca3b6372","Type":"ContainerDied","Data":"1791af8d529865ed25451d10d587032b9474511d4ad1f62b8274cee968c7be5b"} Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.685354 4767 scope.go:117] "RemoveContainer" containerID="3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.725921 4767 scope.go:117] "RemoveContainer" containerID="ed0552b73e3221a9f89255241dbf077ace332eda63c668d17fd73be40b7e635c" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.752401 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvtfg"] Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.764393 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nvtfg"] Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.767278 4767 scope.go:117] "RemoveContainer" containerID="9f430957c97517cfbe062c18a93cf445369740789903e6d8ac65f92d1d6a996b" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.810934 4767 scope.go:117] "RemoveContainer" containerID="3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95" Nov 24 22:24:45 crc kubenswrapper[4767]: E1124 22:24:45.811607 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95\": container with ID starting with 3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95 not found: ID does not exist" containerID="3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95" Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.811679 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95"} err="failed to get container status \"3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95\": rpc error: code = NotFound desc = could not find container \"3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95\": container with ID starting with 3c61e9071fd96ba80a4f4c0dfabedf88ed659f629c2b924457667a503e2e0b95 not found: ID does not exist" Nov 24 22:24:45 crc 
Nov 24 22:24:45 crc kubenswrapper[4767]: E1124 22:24:45.812098 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed0552b73e3221a9f89255241dbf077ace332eda63c668d17fd73be40b7e635c\": container with ID starting with ed0552b73e3221a9f89255241dbf077ace332eda63c668d17fd73be40b7e635c not found: ID does not exist" containerID="ed0552b73e3221a9f89255241dbf077ace332eda63c668d17fd73be40b7e635c"
Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.812137 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed0552b73e3221a9f89255241dbf077ace332eda63c668d17fd73be40b7e635c"} err="failed to get container status \"ed0552b73e3221a9f89255241dbf077ace332eda63c668d17fd73be40b7e635c\": rpc error: code = NotFound desc = could not find container \"ed0552b73e3221a9f89255241dbf077ace332eda63c668d17fd73be40b7e635c\": container with ID starting with ed0552b73e3221a9f89255241dbf077ace332eda63c668d17fd73be40b7e635c not found: ID does not exist"
Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.812164 4767 scope.go:117] "RemoveContainer" containerID="9f430957c97517cfbe062c18a93cf445369740789903e6d8ac65f92d1d6a996b"
Nov 24 22:24:45 crc kubenswrapper[4767]: E1124 22:24:45.812404 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f430957c97517cfbe062c18a93cf445369740789903e6d8ac65f92d1d6a996b\": container with ID starting with 9f430957c97517cfbe062c18a93cf445369740789903e6d8ac65f92d1d6a996b not found: ID does not exist" containerID="9f430957c97517cfbe062c18a93cf445369740789903e6d8ac65f92d1d6a996b"
Nov 24 22:24:45 crc kubenswrapper[4767]: I1124 22:24:45.812427 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f430957c97517cfbe062c18a93cf445369740789903e6d8ac65f92d1d6a996b"} err="failed to get container status \"9f430957c97517cfbe062c18a93cf445369740789903e6d8ac65f92d1d6a996b\": rpc error: code = NotFound desc = could not find container \"9f430957c97517cfbe062c18a93cf445369740789903e6d8ac65f92d1d6a996b\": container with ID starting with 9f430957c97517cfbe062c18a93cf445369740789903e6d8ac65f92d1d6a996b not found: ID does not exist"
Nov 24 22:24:46 crc kubenswrapper[4767]: I1124 22:24:46.330958 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" path="/var/lib/kubelet/pods/ad4658a5-0307-4ef6-a3c1-f30dca3b6372/volumes"
Nov 24 22:24:50 crc kubenswrapper[4767]: I1124 22:24:50.935873 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 24 22:24:50 crc kubenswrapper[4767]: E1124 22:24:50.937154 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerName="extract-content"
Nov 24 22:24:50 crc kubenswrapper[4767]: I1124 22:24:50.937171 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerName="extract-content"
Nov 24 22:24:50 crc kubenswrapper[4767]: E1124 22:24:50.937191 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerName="registry-server"
Nov 24 22:24:50 crc kubenswrapper[4767]: I1124 22:24:50.937197 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerName="registry-server"
Nov 24 22:24:50 crc kubenswrapper[4767]: E1124 22:24:50.937221 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerName="extract-utilities"
Nov 24 22:24:50 crc kubenswrapper[4767]: I1124 22:24:50.937294 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerName="extract-utilities"
Nov 24 22:24:50 crc kubenswrapper[4767]: I1124 22:24:50.937594 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad4658a5-0307-4ef6-a3c1-f30dca3b6372" containerName="registry-server"
Nov 24 22:24:50 crc kubenswrapper[4767]: I1124 22:24:50.938979 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 24 22:24:50 crc kubenswrapper[4767]: I1124 22:24:50.941893 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Nov 24 22:24:50 crc kubenswrapper[4767]: I1124 22:24:50.942176 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pcsqv"
Nov 24 22:24:50 crc kubenswrapper[4767]: I1124 22:24:50.943060 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Nov 24 22:24:50 crc kubenswrapper[4767]: I1124 22:24:50.943918 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Nov 24 22:24:50 crc kubenswrapper[4767]: I1124 22:24:50.946064 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.040335 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.040382 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.040413 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.040434 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.040552 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.040907 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.040963 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-config-data\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.041139 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.041264 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spcjl\" (UniqueName: \"kubernetes.io/projected/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-kube-api-access-spcjl\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.142755 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.142838 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.142871 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.142994 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.143053 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-config-data\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.143222 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.143362 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spcjl\" (UniqueName: \"kubernetes.io/projected/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-kube-api-access-spcjl\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.143411 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.143454 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.143555 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.144078 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.144234 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.146057 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.148142 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-config-data\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.159875 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.160625 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.161327 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.165318 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spcjl\" (UniqueName: \"kubernetes.io/projected/91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3-kube-api-access-spcjl\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.185958 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3\") " pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.261884 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.699054 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 24 22:24:51 crc kubenswrapper[4767]: I1124 22:24:51.756500 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3","Type":"ContainerStarted","Data":"964e077ace1fbf8b07f51c10bbb812f7f4857c60892a1674dc7684404d11a15e"}
Nov 24 22:25:01 crc kubenswrapper[4767]: I1124 22:25:01.861793 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3","Type":"ContainerStarted","Data":"9b846d1f21d87b48ac5f3a7d428961725704c327a6ca9ae9d3094876762fe273"}
Nov 24 22:25:01 crc kubenswrapper[4767]: I1124 22:25:01.884975 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.649493632 podStartE2EDuration="12.884950313s" podCreationTimestamp="2025-11-24 22:24:49 +0000 UTC" firstStartedPulling="2025-11-24 22:24:51.708241459 +0000 UTC m=+2774.625224841" lastFinishedPulling="2025-11-24 22:25:00.94369815 +0000 UTC m=+2783.860681522" observedRunningTime="2025-11-24 22:25:01.880577338 +0000 UTC m=+2784.797560730" watchObservedRunningTime="2025-11-24 22:25:01.884950313 +0000 UTC m=+2784.801933705"
Nov 24 22:25:35 crc kubenswrapper[4767]: I1124 22:25:35.481337 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:25:35 crc kubenswrapper[4767]: I1124 22:25:35.481844 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:26:05 crc kubenswrapper[4767]: I1124 22:26:05.481520 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:26:05 crc kubenswrapper[4767]: I1124 22:26:05.482317 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:26:35 crc kubenswrapper[4767]: I1124 22:26:35.481807 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:26:35 crc kubenswrapper[4767]: I1124 22:26:35.482569 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:26:35 crc kubenswrapper[4767]: I1124 22:26:35.482629 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd"
Nov 24 22:26:35 crc kubenswrapper[4767]: I1124 22:26:35.483765 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 22:26:35 crc kubenswrapper[4767]: I1124 22:26:35.483866 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" gracePeriod=600
Nov 24 22:26:35 crc kubenswrapper[4767]: E1124 22:26:35.619303 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 22:26:36 crc kubenswrapper[4767]: I1124 22:26:36.037495 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" exitCode=0
Nov 24 22:26:36 crc kubenswrapper[4767]: I1124 22:26:36.037558 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"}
Nov 24 22:26:36 crc kubenswrapper[4767]: I1124 22:26:36.037613 4767 scope.go:117] "RemoveContainer" containerID="dad6703894db8aa762877a6ddc6ca95faefcc00fa6123271efb16deedb6c3b70"
Nov 24 22:26:36 crc kubenswrapper[4767]: I1124 22:26:36.039640 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"
Nov 24 22:26:36 crc kubenswrapper[4767]: E1124 22:26:36.040367 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 22:26:50 crc kubenswrapper[4767]: I1124 22:26:50.313775 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"
Nov 24 22:26:50 crc kubenswrapper[4767]: E1124 22:26:50.315226 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 22:27:04 crc kubenswrapper[4767]: I1124 22:27:04.313872 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"
Nov 24 22:27:04 crc kubenswrapper[4767]: E1124 22:27:04.314622 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 22:27:19 crc kubenswrapper[4767]: I1124 22:27:19.314126 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"
Nov 24 22:27:19 crc kubenswrapper[4767]: E1124 22:27:19.315633 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 22:27:33 crc kubenswrapper[4767]: I1124 22:27:33.313642 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"
Nov 24 22:27:33 crc kubenswrapper[4767]: E1124 22:27:33.314573 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 22:27:48 crc kubenswrapper[4767]: I1124 22:27:48.322200 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"
Nov 24 22:27:48 crc kubenswrapper[4767]: E1124 22:27:48.323068 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 22:27:53 crc kubenswrapper[4767]: I1124 22:27:53.804257 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xzn95"]
Nov 24 22:27:53 crc kubenswrapper[4767]: I1124 22:27:53.810727 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:27:53 crc kubenswrapper[4767]: I1124 22:27:53.819049 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xzn95"]
Nov 24 22:27:53 crc kubenswrapper[4767]: I1124 22:27:53.907128 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrlph\" (UniqueName: \"kubernetes.io/projected/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-kube-api-access-xrlph\") pod \"community-operators-xzn95\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") " pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:27:53 crc kubenswrapper[4767]: I1124 22:27:53.907606 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-catalog-content\") pod \"community-operators-xzn95\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") " pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:27:53 crc kubenswrapper[4767]: I1124 22:27:53.907810 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-utilities\") pod \"community-operators-xzn95\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") " pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:27:54 crc kubenswrapper[4767]: I1124 22:27:54.010353 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrlph\" (UniqueName: \"kubernetes.io/projected/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-kube-api-access-xrlph\") pod \"community-operators-xzn95\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") " pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:27:54 crc kubenswrapper[4767]: I1124 22:27:54.010508 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-catalog-content\") pod \"community-operators-xzn95\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") " pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:27:54 crc kubenswrapper[4767]: I1124 22:27:54.010534 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-utilities\") pod \"community-operators-xzn95\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") " pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:27:54 crc kubenswrapper[4767]: I1124 22:27:54.011174 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-catalog-content\") pod \"community-operators-xzn95\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") " pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:27:54 crc kubenswrapper[4767]: I1124 22:27:54.011351 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-utilities\") pod \"community-operators-xzn95\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") " pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:27:54 crc kubenswrapper[4767]: I1124 22:27:54.035095 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrlph\" (UniqueName: \"kubernetes.io/projected/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-kube-api-access-xrlph\") pod \"community-operators-xzn95\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") " pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:27:54 crc kubenswrapper[4767]: I1124 22:27:54.168010 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:27:54 crc kubenswrapper[4767]: I1124 22:27:54.704433 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xzn95"]
Nov 24 22:27:54 crc kubenswrapper[4767]: W1124 22:27:54.707374 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ecd5bd8_3872_472f_8d0e_c58cd46e11d7.slice/crio-867f2a671869061af921675cc3171629ead12c9fb223b9739562bf48d0f1aa90 WatchSource:0}: Error finding container 867f2a671869061af921675cc3171629ead12c9fb223b9739562bf48d0f1aa90: Status 404 returned error can't find the container with id 867f2a671869061af921675cc3171629ead12c9fb223b9739562bf48d0f1aa90
Nov 24 22:27:54 crc kubenswrapper[4767]: I1124 22:27:54.960825 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzn95" event={"ID":"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7","Type":"ContainerStarted","Data":"610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229"}
Nov 24 22:27:54 crc kubenswrapper[4767]: I1124 22:27:54.960869 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzn95" event={"ID":"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7","Type":"ContainerStarted","Data":"867f2a671869061af921675cc3171629ead12c9fb223b9739562bf48d0f1aa90"}
Nov 24 22:27:55 crc kubenswrapper[4767]: I1124 22:27:55.974048 4767 generic.go:334] "Generic (PLEG): container finished" podID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" containerID="610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229" exitCode=0
Nov 24 22:27:55 crc kubenswrapper[4767]: I1124 22:27:55.974100 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzn95" event={"ID":"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7","Type":"ContainerDied","Data":"610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229"}
Nov 24 22:27:56 crc kubenswrapper[4767]: I1124 22:27:56.990990 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzn95" event={"ID":"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7","Type":"ContainerStarted","Data":"bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004"}
Nov 24 22:27:59 crc kubenswrapper[4767]: I1124 22:27:59.041985 4767 generic.go:334] "Generic (PLEG): container finished" podID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" containerID="bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004" exitCode=0
Nov 24 22:27:59 crc kubenswrapper[4767]: I1124 22:27:59.042100 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzn95" event={"ID":"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7","Type":"ContainerDied","Data":"bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004"}
Nov 24 22:28:00 crc kubenswrapper[4767]: I1124 22:28:00.059017 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzn95" event={"ID":"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7","Type":"ContainerStarted","Data":"c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6"}
Nov 24 22:28:00 crc kubenswrapper[4767]: I1124 22:28:00.080920 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xzn95" podStartSLOduration=3.592186251 podStartE2EDuration="7.080900905s" podCreationTimestamp="2025-11-24 22:27:53 +0000 UTC" firstStartedPulling="2025-11-24 22:27:55.976528212 +0000 UTC m=+2958.893511624" lastFinishedPulling="2025-11-24 22:27:59.465242866 +0000 UTC m=+2962.382226278" observedRunningTime="2025-11-24 22:28:00.079770003 +0000 UTC m=+2962.996753415" watchObservedRunningTime="2025-11-24 22:28:00.080900905 +0000 UTC m=+2962.997884277"
Nov 24 22:28:02 crc kubenswrapper[4767]: I1124 22:28:02.313863 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"
Nov 24 22:28:02 crc kubenswrapper[4767]: E1124 22:28:02.315920 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 22:28:04 crc kubenswrapper[4767]: I1124 22:28:04.168497 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:28:04 crc kubenswrapper[4767]: I1124 22:28:04.168949 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:28:04 crc kubenswrapper[4767]: I1124 22:28:04.235845 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:28:05 crc kubenswrapper[4767]: I1124 22:28:05.175935 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:28:05 crc kubenswrapper[4767]: I1124 22:28:05.242893 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xzn95"]
Nov 24 22:28:07 crc kubenswrapper[4767]: I1124 22:28:07.129670 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xzn95" podUID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" containerName="registry-server" containerID="cri-o://c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6" gracePeriod=2
Nov 24 22:28:07 crc kubenswrapper[4767]: I1124 22:28:07.624498 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:28:07 crc kubenswrapper[4767]: I1124 22:28:07.738048 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrlph\" (UniqueName: \"kubernetes.io/projected/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-kube-api-access-xrlph\") pod \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") "
Nov 24 22:28:07 crc kubenswrapper[4767]: I1124 22:28:07.738244 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-utilities\") pod \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") "
Nov 24 22:28:07 crc kubenswrapper[4767]: I1124 22:28:07.738475 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-catalog-content\") pod \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\" (UID: \"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7\") "
Nov 24 22:28:07 crc kubenswrapper[4767]: I1124 22:28:07.739218 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-utilities" (OuterVolumeSpecName: "utilities") pod "9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" (UID: "9ecd5bd8-3872-472f-8d0e-c58cd46e11d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 22:28:07 crc kubenswrapper[4767]: I1124 22:28:07.745759 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-kube-api-access-xrlph" (OuterVolumeSpecName: "kube-api-access-xrlph") pod "9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" (UID: "9ecd5bd8-3872-472f-8d0e-c58cd46e11d7"). InnerVolumeSpecName "kube-api-access-xrlph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:28:07 crc kubenswrapper[4767]: I1124 22:28:07.791717 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" (UID: "9ecd5bd8-3872-472f-8d0e-c58cd46e11d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 22:28:07 crc kubenswrapper[4767]: I1124 22:28:07.840923 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 22:28:07 crc kubenswrapper[4767]: I1124 22:28:07.840963 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrlph\" (UniqueName: \"kubernetes.io/projected/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-kube-api-access-xrlph\") on node \"crc\" DevicePath \"\""
Nov 24 22:28:07 crc kubenswrapper[4767]: I1124 22:28:07.840978 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.141880 4767 generic.go:334] "Generic (PLEG): container finished" podID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" containerID="c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6" exitCode=0
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.141929 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzn95" event={"ID":"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7","Type":"ContainerDied","Data":"c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6"}
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.141971 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzn95" event={"ID":"9ecd5bd8-3872-472f-8d0e-c58cd46e11d7","Type":"ContainerDied","Data":"867f2a671869061af921675cc3171629ead12c9fb223b9739562bf48d0f1aa90"}
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.141992 4767 scope.go:117] "RemoveContainer" containerID="c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6"
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.142030 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xzn95"
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.178408 4767 scope.go:117] "RemoveContainer" containerID="bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004"
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.191183 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xzn95"]
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.209215 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xzn95"]
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.214221 4767 scope.go:117] "RemoveContainer" containerID="610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229"
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.282691 4767 scope.go:117] "RemoveContainer" containerID="c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6"
Nov 24 22:28:08 crc kubenswrapper[4767]: E1124 22:28:08.283390 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6\": container with ID starting with c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6 not found: ID does not exist" containerID="c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6"
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.283433 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6"} err="failed to get container status \"c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6\": rpc error: code = NotFound desc = could not find container \"c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6\": container with ID starting with c4cfec318edbf5a70305778aa2a219cc93b0227137c4450ff8159e001696d2b6 not found: ID does not exist"
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.283460 4767 scope.go:117] "RemoveContainer" containerID="bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004"
Nov 24 22:28:08 crc kubenswrapper[4767]: E1124 22:28:08.284085 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004\": container with ID starting with bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004 not found: ID does not exist" containerID="bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004"
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.284157 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004"} err="failed to get container status \"bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004\": rpc error: code = NotFound desc = could not find container \"bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004\": container with ID starting with bafa8411269ed85d43a5de469d3a475a0e0882145506aa41d4632fcb16a0a004 not found: ID does not exist"
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.284204 4767 scope.go:117] "RemoveContainer" containerID="610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229"
Nov 24 22:28:08 crc kubenswrapper[4767]: E1124 22:28:08.284638 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229\": container with ID starting with 610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229 not found: ID does not exist" containerID="610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229"
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.284687 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229"} err="failed to get container status \"610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229\": rpc error: code = NotFound desc = could not find container \"610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229\": container with ID starting with 610d29b68b06738c6a78110ef7aa9498772d67b4281bca752d745723139ac229 not found: ID does not exist"
Nov 24 22:28:08 crc kubenswrapper[4767]: I1124 22:28:08.329742 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" path="/var/lib/kubelet/pods/9ecd5bd8-3872-472f-8d0e-c58cd46e11d7/volumes"
Nov 24 22:28:15 crc kubenswrapper[4767]: I1124 22:28:15.313236 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"
Nov 24 22:28:15 crc kubenswrapper[4767]: E1124 22:28:15.314249 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 22:28:28 crc kubenswrapper[4767]: I1124 22:28:28.313653 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"
Nov 24 22:28:28 crc kubenswrapper[4767]: E1124 22:28:28.315084 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 22:28:39 crc kubenswrapper[4767]: I1124 22:28:39.313943 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"
Nov 24 22:28:39 crc kubenswrapper[4767]: E1124 22:28:39.315220 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 22:28:50 crc kubenswrapper[4767]: I1124 22:28:50.314536 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7"
Nov 24 22:28:50 crc kubenswrapper[4767]: E1124 22:28:50.315577 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:29:03 crc kubenswrapper[4767]: I1124 22:29:03.314441 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:29:03 crc kubenswrapper[4767]: E1124 22:29:03.321317 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:29:17 crc kubenswrapper[4767]: I1124 22:29:17.313723 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:29:17 crc kubenswrapper[4767]: E1124 22:29:17.314903 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:29:28 crc kubenswrapper[4767]: I1124 22:29:28.325169 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:29:28 crc kubenswrapper[4767]: E1124 22:29:28.326160 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:29:40 crc kubenswrapper[4767]: I1124 22:29:40.313215 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:29:40 crc kubenswrapper[4767]: E1124 22:29:40.314116 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:29:54 crc kubenswrapper[4767]: I1124 22:29:54.313597 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:29:54 crc kubenswrapper[4767]: E1124 22:29:54.314905 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.202814 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df"] Nov 24 22:30:00 crc kubenswrapper[4767]: E1124 22:30:00.204166 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" containerName="extract-content" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.204197 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" containerName="extract-content" Nov 24 22:30:00 crc kubenswrapper[4767]: E1124 22:30:00.204234 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.204248 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4767]: E1124 22:30:00.204317 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" containerName="extract-utilities" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.204331 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" containerName="extract-utilities" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.204797 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecd5bd8-3872-472f-8d0e-c58cd46e11d7" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.205959 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.208864 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.208940 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.243184 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df"] Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.305518 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhhf4\" (UniqueName: \"kubernetes.io/projected/6aef0d5c-c571-45ad-80ca-21ca33e380cb-kube-api-access-fhhf4\") pod \"collect-profiles-29400390-jr9df\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.305578 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6aef0d5c-c571-45ad-80ca-21ca33e380cb-config-volume\") pod \"collect-profiles-29400390-jr9df\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.305835 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6aef0d5c-c571-45ad-80ca-21ca33e380cb-secret-volume\") pod \"collect-profiles-29400390-jr9df\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.409033 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhhf4\" (UniqueName: \"kubernetes.io/projected/6aef0d5c-c571-45ad-80ca-21ca33e380cb-kube-api-access-fhhf4\") pod \"collect-profiles-29400390-jr9df\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.409484 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6aef0d5c-c571-45ad-80ca-21ca33e380cb-config-volume\") pod \"collect-profiles-29400390-jr9df\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.409626 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6aef0d5c-c571-45ad-80ca-21ca33e380cb-secret-volume\") pod \"collect-profiles-29400390-jr9df\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.413406 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6aef0d5c-c571-45ad-80ca-21ca33e380cb-config-volume\") pod 
\"collect-profiles-29400390-jr9df\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.436631 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6aef0d5c-c571-45ad-80ca-21ca33e380cb-secret-volume\") pod \"collect-profiles-29400390-jr9df\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.437240 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhhf4\" (UniqueName: \"kubernetes.io/projected/6aef0d5c-c571-45ad-80ca-21ca33e380cb-kube-api-access-fhhf4\") pod \"collect-profiles-29400390-jr9df\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:00 crc kubenswrapper[4767]: I1124 22:30:00.542600 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:01 crc kubenswrapper[4767]: I1124 22:30:01.027232 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df"] Nov 24 22:30:01 crc kubenswrapper[4767]: I1124 22:30:01.660407 4767 generic.go:334] "Generic (PLEG): container finished" podID="6aef0d5c-c571-45ad-80ca-21ca33e380cb" containerID="71840228ff7399671c094fc4ea3a0d64c8f471f67a4ec2bb0e5fe27b105e4157" exitCode=0 Nov 24 22:30:01 crc kubenswrapper[4767]: I1124 22:30:01.660459 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" event={"ID":"6aef0d5c-c571-45ad-80ca-21ca33e380cb","Type":"ContainerDied","Data":"71840228ff7399671c094fc4ea3a0d64c8f471f67a4ec2bb0e5fe27b105e4157"} Nov 24 22:30:01 crc kubenswrapper[4767]: I1124 22:30:01.660847 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" event={"ID":"6aef0d5c-c571-45ad-80ca-21ca33e380cb","Type":"ContainerStarted","Data":"ad4a94fc9e5f9903ea4068b60ad7bdd32192b60a23bdc38859025d205abbeb58"} Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.069011 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.160633 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6aef0d5c-c571-45ad-80ca-21ca33e380cb-config-volume\") pod \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.161066 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6aef0d5c-c571-45ad-80ca-21ca33e380cb-secret-volume\") pod \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.161118 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhhf4\" (UniqueName: \"kubernetes.io/projected/6aef0d5c-c571-45ad-80ca-21ca33e380cb-kube-api-access-fhhf4\") pod \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\" (UID: \"6aef0d5c-c571-45ad-80ca-21ca33e380cb\") " Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.162026 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aef0d5c-c571-45ad-80ca-21ca33e380cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "6aef0d5c-c571-45ad-80ca-21ca33e380cb" (UID: "6aef0d5c-c571-45ad-80ca-21ca33e380cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.177340 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aef0d5c-c571-45ad-80ca-21ca33e380cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6aef0d5c-c571-45ad-80ca-21ca33e380cb" (UID: "6aef0d5c-c571-45ad-80ca-21ca33e380cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.182277 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aef0d5c-c571-45ad-80ca-21ca33e380cb-kube-api-access-fhhf4" (OuterVolumeSpecName: "kube-api-access-fhhf4") pod "6aef0d5c-c571-45ad-80ca-21ca33e380cb" (UID: "6aef0d5c-c571-45ad-80ca-21ca33e380cb"). InnerVolumeSpecName "kube-api-access-fhhf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.263491 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6aef0d5c-c571-45ad-80ca-21ca33e380cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.263524 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6aef0d5c-c571-45ad-80ca-21ca33e380cb-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.263534 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhhf4\" (UniqueName: \"kubernetes.io/projected/6aef0d5c-c571-45ad-80ca-21ca33e380cb-kube-api-access-fhhf4\") on node \"crc\" DevicePath \"\"" Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.683454 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" event={"ID":"6aef0d5c-c571-45ad-80ca-21ca33e380cb","Type":"ContainerDied","Data":"ad4a94fc9e5f9903ea4068b60ad7bdd32192b60a23bdc38859025d205abbeb58"} Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.683494 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad4a94fc9e5f9903ea4068b60ad7bdd32192b60a23bdc38859025d205abbeb58" Nov 24 22:30:03 crc kubenswrapper[4767]: I1124 22:30:03.683539 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df" Nov 24 22:30:04 crc kubenswrapper[4767]: I1124 22:30:04.182248 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"] Nov 24 22:30:04 crc kubenswrapper[4767]: I1124 22:30:04.199841 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-md6mv"] Nov 24 22:30:04 crc kubenswrapper[4767]: I1124 22:30:04.327950 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b98d4c-dd1c-49a7-97a5-fab5e138fefd" path="/var/lib/kubelet/pods/66b98d4c-dd1c-49a7-97a5-fab5e138fefd/volumes" Nov 24 22:30:09 crc kubenswrapper[4767]: I1124 22:30:09.313340 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:30:09 crc kubenswrapper[4767]: E1124 22:30:09.314082 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:30:20 crc kubenswrapper[4767]: I1124 22:30:20.313765 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:30:20 crc kubenswrapper[4767]: E1124 22:30:20.314537 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:30:31 crc kubenswrapper[4767]: I1124 22:30:31.314655 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:30:31 crc kubenswrapper[4767]: E1124 22:30:31.315715 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:30:43 crc kubenswrapper[4767]: I1124 22:30:43.854843 4767 scope.go:117] "RemoveContainer" containerID="8e78ca6e7a5d36bcd8fb07fd4e44ebbd484b67193ecd129ff945a7016e779faf" Nov 24 22:30:44 crc kubenswrapper[4767]: I1124 22:30:44.315141 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:30:44 crc kubenswrapper[4767]: E1124 22:30:44.315631 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.572878 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wgwmp"] Nov 24 22:30:51 crc kubenswrapper[4767]: E1124 22:30:51.573923 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aef0d5c-c571-45ad-80ca-21ca33e380cb" containerName="collect-profiles" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.573938 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aef0d5c-c571-45ad-80ca-21ca33e380cb" containerName="collect-profiles" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.574220 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aef0d5c-c571-45ad-80ca-21ca33e380cb" containerName="collect-profiles" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.582998 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.600166 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgwmp"] Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.725031 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vvcc\" (UniqueName: \"kubernetes.io/projected/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-kube-api-access-8vvcc\") pod \"redhat-marketplace-wgwmp\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.726119 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-utilities\") pod \"redhat-marketplace-wgwmp\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.726232 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-catalog-content\") pod \"redhat-marketplace-wgwmp\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.827971 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-utilities\") pod \"redhat-marketplace-wgwmp\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.828098 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-catalog-content\") pod \"redhat-marketplace-wgwmp\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.828173 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvcc\" (UniqueName: \"kubernetes.io/projected/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-kube-api-access-8vvcc\") pod \"redhat-marketplace-wgwmp\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.828530 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-utilities\") pod \"redhat-marketplace-wgwmp\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.828648 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-catalog-content\") pod \"redhat-marketplace-wgwmp\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.852461 4767 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8vvcc\" (UniqueName: \"kubernetes.io/projected/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-kube-api-access-8vvcc\") pod \"redhat-marketplace-wgwmp\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:30:51 crc kubenswrapper[4767]: I1124 22:30:51.907110 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:30:52 crc kubenswrapper[4767]: I1124 22:30:52.402814 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgwmp"] Nov 24 22:30:53 crc kubenswrapper[4767]: I1124 22:30:53.257858 4767 generic.go:334] "Generic (PLEG): container finished" podID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" containerID="9139fe5d4de6659c80e8557e445a7f38df66bd534be6cb94e3df238d18e901e6" exitCode=0 Nov 24 22:30:53 crc kubenswrapper[4767]: I1124 22:30:53.258389 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgwmp" event={"ID":"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7","Type":"ContainerDied","Data":"9139fe5d4de6659c80e8557e445a7f38df66bd534be6cb94e3df238d18e901e6"} Nov 24 22:30:53 crc kubenswrapper[4767]: I1124 22:30:53.258511 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgwmp" event={"ID":"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7","Type":"ContainerStarted","Data":"27842a5a17c0ed9fb06f701d9bf3cb55e3904a9e95d84a33eb22e763c88446a2"} Nov 24 22:30:53 crc kubenswrapper[4767]: I1124 22:30:53.263427 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:30:54 crc kubenswrapper[4767]: I1124 22:30:54.269073 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgwmp" event={"ID":"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7","Type":"ContainerStarted","Data":"8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e"} Nov 24 22:30:55 crc kubenswrapper[4767]: I1124 22:30:55.291150 4767 generic.go:334] "Generic (PLEG): container finished" podID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" containerID="8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e" exitCode=0 Nov 24 22:30:55 crc kubenswrapper[4767]: I1124 22:30:55.291233 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgwmp" event={"ID":"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7","Type":"ContainerDied","Data":"8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e"} Nov 24 22:30:56 crc kubenswrapper[4767]: I1124 22:30:56.306686 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgwmp" event={"ID":"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7","Type":"ContainerStarted","Data":"8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf"} Nov 24 22:30:56 crc kubenswrapper[4767]: I1124 22:30:56.314725 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:30:56 crc kubenswrapper[4767]: E1124 22:30:56.315250 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:30:56 crc kubenswrapper[4767]: I1124 22:30:56.345396 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wgwmp" podStartSLOduration=2.864030015 podStartE2EDuration="5.34464088s" podCreationTimestamp="2025-11-24 22:30:51 +0000 UTC" firstStartedPulling="2025-11-24 22:30:53.26297619 +0000 UTC m=+3136.179959602" lastFinishedPulling="2025-11-24 22:30:55.743587055 +0000 UTC m=+3138.660570467" observedRunningTime="2025-11-24 22:30:56.339900486 +0000 UTC m=+3139.256883868" watchObservedRunningTime="2025-11-24 22:30:56.34464088 +0000 UTC m=+3139.261624282" Nov 24 22:31:01 crc kubenswrapper[4767]: I1124 22:31:01.908187 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:31:01 crc kubenswrapper[4767]: I1124 22:31:01.909064 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:31:01 crc kubenswrapper[4767]: I1124 22:31:01.980093 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:31:02 crc kubenswrapper[4767]: I1124 22:31:02.435564 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:31:02 crc kubenswrapper[4767]: I1124 22:31:02.496370 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgwmp"] Nov 24 22:31:04 crc kubenswrapper[4767]: I1124 22:31:04.397986 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wgwmp" podUID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" containerName="registry-server" containerID="cri-o://8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf" gracePeriod=2 Nov 24 22:31:04 crc kubenswrapper[4767]: I1124 22:31:04.946317 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.039788 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-utilities\") pod \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.039849 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-catalog-content\") pod \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.039992 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vvcc\" (UniqueName: \"kubernetes.io/projected/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-kube-api-access-8vvcc\") pod \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\" (UID: \"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7\") " Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.040662 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-utilities" (OuterVolumeSpecName: "utilities") pod "5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" (UID: "5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.047933 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-kube-api-access-8vvcc" (OuterVolumeSpecName: "kube-api-access-8vvcc") pod "5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" (UID: "5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7"). InnerVolumeSpecName "kube-api-access-8vvcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.063891 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" (UID: "5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.142721 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vvcc\" (UniqueName: \"kubernetes.io/projected/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-kube-api-access-8vvcc\") on node \"crc\" DevicePath \"\"" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.142750 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.142759 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.410932 4767 generic.go:334] "Generic (PLEG): container finished" podID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" containerID="8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf" exitCode=0 Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.411025 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgwmp" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.411067 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgwmp" event={"ID":"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7","Type":"ContainerDied","Data":"8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf"} Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.411600 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgwmp" event={"ID":"5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7","Type":"ContainerDied","Data":"27842a5a17c0ed9fb06f701d9bf3cb55e3904a9e95d84a33eb22e763c88446a2"} Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.411640 4767 scope.go:117] "RemoveContainer" containerID="8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.444147 4767 scope.go:117] "RemoveContainer" containerID="8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.479339 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgwmp"] Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.492069 4767 scope.go:117] "RemoveContainer" containerID="9139fe5d4de6659c80e8557e445a7f38df66bd534be6cb94e3df238d18e901e6" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.495314 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgwmp"] Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.528446 4767 scope.go:117] "RemoveContainer" containerID="8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf" Nov 24 22:31:05 crc kubenswrapper[4767]: E1124 22:31:05.528938 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf\": container with ID starting with 8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf not found: ID does not exist" containerID="8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.529007 4767 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf"} err="failed to get container status \"8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf\": rpc error: code = NotFound desc = could not find container \"8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf\": container with ID starting with 8b3787171a8506d976c170bb2242c6a17ce59e292478422c23429083a9ccfcbf not found: ID does not exist" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.529042 4767 scope.go:117] "RemoveContainer" containerID="8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e" Nov 24 22:31:05 crc kubenswrapper[4767]: E1124 22:31:05.529426 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e\": container with ID starting with 8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e not found: ID does not exist" containerID="8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.529469 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e"} err="failed to get container status \"8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e\": rpc error: code = NotFound desc = could not find container \"8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e\": container with ID starting with 8fc03a0479565f60764bcda6ba147720fa0128c7dcc86e1a0300894060f6796e not found: ID does not exist" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.529507 4767 scope.go:117] "RemoveContainer" containerID="9139fe5d4de6659c80e8557e445a7f38df66bd534be6cb94e3df238d18e901e6" Nov 24 22:31:05 crc kubenswrapper[4767]: E1124 22:31:05.529742 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9139fe5d4de6659c80e8557e445a7f38df66bd534be6cb94e3df238d18e901e6\": container with ID starting with 9139fe5d4de6659c80e8557e445a7f38df66bd534be6cb94e3df238d18e901e6 not found: ID does not exist" containerID="9139fe5d4de6659c80e8557e445a7f38df66bd534be6cb94e3df238d18e901e6" Nov 24 22:31:05 crc kubenswrapper[4767]: I1124 22:31:05.529799 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9139fe5d4de6659c80e8557e445a7f38df66bd534be6cb94e3df238d18e901e6"} err="failed to get container status \"9139fe5d4de6659c80e8557e445a7f38df66bd534be6cb94e3df238d18e901e6\": rpc error: code = NotFound desc = could not find container \"9139fe5d4de6659c80e8557e445a7f38df66bd534be6cb94e3df238d18e901e6\": container with ID starting with 9139fe5d4de6659c80e8557e445a7f38df66bd534be6cb94e3df238d18e901e6 not found: ID does not exist" Nov 24 22:31:06 crc kubenswrapper[4767]: I1124 22:31:06.330800 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" path="/var/lib/kubelet/pods/5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7/volumes" Nov 24 22:31:11 crc kubenswrapper[4767]: I1124 22:31:11.314145 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:31:11 crc kubenswrapper[4767]: E1124 22:31:11.314893 4767 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:31:23 crc kubenswrapper[4767]: I1124 22:31:23.313808 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:31:23 crc kubenswrapper[4767]: E1124 22:31:23.314611 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:31:36 crc kubenswrapper[4767]: I1124 22:31:36.314240 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:31:36 crc kubenswrapper[4767]: I1124 22:31:36.750625 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"92645578fecb8c9e494395be1fe3d3037ad4ba9382efce850976d1579e6640b8"} Nov 24 22:34:05 crc kubenswrapper[4767]: I1124 22:34:05.482185 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:34:05 crc kubenswrapper[4767]: I1124 22:34:05.482912 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:34:35 crc kubenswrapper[4767]: I1124 22:34:35.481665 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:34:35 crc kubenswrapper[4767]: I1124 22:34:35.482375 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.306579 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tch7x"] Nov 24 22:34:52 crc kubenswrapper[4767]: E1124 22:34:52.307951 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" containerName="extract-utilities" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.307967 4767 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" containerName="extract-utilities" Nov 24 22:34:52 crc kubenswrapper[4767]: E1124 22:34:52.308001 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" containerName="registry-server" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.308009 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" containerName="registry-server" Nov 24 22:34:52 crc kubenswrapper[4767]: E1124 22:34:52.308044 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" containerName="extract-content" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.308052 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" containerName="extract-content" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.308259 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d57b25b-3f89-4c51-ae87-9c5d1b0a3df7" containerName="registry-server" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.309700 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.346068 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tch7x"] Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.372463 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn8vp\" (UniqueName: \"kubernetes.io/projected/cf981511-6313-4684-a2b9-adc784165a65-kube-api-access-zn8vp\") pod \"redhat-operators-tch7x\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.372901 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-utilities\") pod \"redhat-operators-tch7x\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.373262 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-catalog-content\") pod \"redhat-operators-tch7x\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.474713 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-catalog-content\") pod \"redhat-operators-tch7x\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.475037 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn8vp\" (UniqueName: \"kubernetes.io/projected/cf981511-6313-4684-a2b9-adc784165a65-kube-api-access-zn8vp\") pod \"redhat-operators-tch7x\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.475190 4767 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-utilities\") pod \"redhat-operators-tch7x\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.475290 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-catalog-content\") pod \"redhat-operators-tch7x\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.475749 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-utilities\") pod \"redhat-operators-tch7x\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.498484 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn8vp\" (UniqueName: \"kubernetes.io/projected/cf981511-6313-4684-a2b9-adc784165a65-kube-api-access-zn8vp\") pod \"redhat-operators-tch7x\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:34:52 crc kubenswrapper[4767]: I1124 22:34:52.640426 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:34:53 crc kubenswrapper[4767]: I1124 22:34:53.120469 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tch7x"] Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.013713 4767 generic.go:334] "Generic (PLEG): container finished" podID="cf981511-6313-4684-a2b9-adc784165a65" containerID="63d721bdb5514137db2b0927bc129a1f01e9a301fa7c509d1a972cb5a6c4cf14" exitCode=0 Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.013840 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tch7x" event={"ID":"cf981511-6313-4684-a2b9-adc784165a65","Type":"ContainerDied","Data":"63d721bdb5514137db2b0927bc129a1f01e9a301fa7c509d1a972cb5a6c4cf14"} Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.014236 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tch7x" event={"ID":"cf981511-6313-4684-a2b9-adc784165a65","Type":"ContainerStarted","Data":"1662b695d7b9d560081bff5a4d1c06135a24d3076e554c83a62929b16d51daed"} Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.686969 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h4zx8"] Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.689621 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.706907 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4zx8"] Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.832644 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-utilities\") pod \"certified-operators-h4zx8\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.832717 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tmjh\" (UniqueName: \"kubernetes.io/projected/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-kube-api-access-7tmjh\") pod \"certified-operators-h4zx8\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.832851 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-catalog-content\") pod \"certified-operators-h4zx8\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.934379 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-utilities\") pod \"certified-operators-h4zx8\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.934490 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tmjh\" (UniqueName: \"kubernetes.io/projected/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-kube-api-access-7tmjh\") pod \"certified-operators-h4zx8\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.934647 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-catalog-content\") pod \"certified-operators-h4zx8\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.934973 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-catalog-content\") pod \"certified-operators-h4zx8\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.935170 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-utilities\") pod \"certified-operators-h4zx8\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:34:54 crc kubenswrapper[4767]: I1124 22:34:54.961200 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7tmjh\" (UniqueName: \"kubernetes.io/projected/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-kube-api-access-7tmjh\") pod \"certified-operators-h4zx8\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:34:55 crc kubenswrapper[4767]: I1124 22:34:55.024183 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tch7x" event={"ID":"cf981511-6313-4684-a2b9-adc784165a65","Type":"ContainerStarted","Data":"89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30"} Nov 24 22:34:55 crc kubenswrapper[4767]: I1124 22:34:55.042706 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:34:55 crc kubenswrapper[4767]: I1124 22:34:55.609134 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4zx8"] Nov 24 22:34:55 crc kubenswrapper[4767]: W1124 22:34:55.616386 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc1a97cc_7db4_4c18_98bc_92f2d5e75030.slice/crio-9ee55f55c07369d4f529d5063eaceb8266f657f260fd163c74ec223ab5e52830 WatchSource:0}: Error finding container 9ee55f55c07369d4f529d5063eaceb8266f657f260fd163c74ec223ab5e52830: Status 404 returned error can't find the container with id 9ee55f55c07369d4f529d5063eaceb8266f657f260fd163c74ec223ab5e52830 Nov 24 22:34:56 crc kubenswrapper[4767]: I1124 22:34:56.038363 4767 generic.go:334] "Generic (PLEG): container finished" podID="cf981511-6313-4684-a2b9-adc784165a65" containerID="89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30" exitCode=0 Nov 24 22:34:56 crc kubenswrapper[4767]: I1124 22:34:56.038449 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tch7x" event={"ID":"cf981511-6313-4684-a2b9-adc784165a65","Type":"ContainerDied","Data":"89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30"} Nov 24 22:34:56 crc kubenswrapper[4767]: I1124 22:34:56.042090 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zx8" event={"ID":"bc1a97cc-7db4-4c18-98bc-92f2d5e75030","Type":"ContainerStarted","Data":"59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9"} Nov 24 22:34:56 crc kubenswrapper[4767]: I1124 22:34:56.042134 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zx8" event={"ID":"bc1a97cc-7db4-4c18-98bc-92f2d5e75030","Type":"ContainerStarted","Data":"9ee55f55c07369d4f529d5063eaceb8266f657f260fd163c74ec223ab5e52830"} Nov 24 22:34:57 crc kubenswrapper[4767]: I1124 22:34:57.058517 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tch7x" event={"ID":"cf981511-6313-4684-a2b9-adc784165a65","Type":"ContainerStarted","Data":"0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1"} Nov 24 22:34:57 crc kubenswrapper[4767]: I1124 22:34:57.062656 4767 generic.go:334] "Generic (PLEG): container finished" podID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" containerID="59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9" exitCode=0 Nov 24 22:34:57 crc kubenswrapper[4767]: I1124 22:34:57.062742 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zx8" 
event={"ID":"bc1a97cc-7db4-4c18-98bc-92f2d5e75030","Type":"ContainerDied","Data":"59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9"} Nov 24 22:34:57 crc kubenswrapper[4767]: I1124 22:34:57.088574 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tch7x" podStartSLOduration=2.605652561 podStartE2EDuration="5.088549902s" podCreationTimestamp="2025-11-24 22:34:52 +0000 UTC" firstStartedPulling="2025-11-24 22:34:54.015894047 +0000 UTC m=+3376.932877419" lastFinishedPulling="2025-11-24 22:34:56.498791358 +0000 UTC m=+3379.415774760" observedRunningTime="2025-11-24 22:34:57.08318215 +0000 UTC m=+3380.000165552" watchObservedRunningTime="2025-11-24 22:34:57.088549902 +0000 UTC m=+3380.005533284" Nov 24 22:34:58 crc kubenswrapper[4767]: I1124 22:34:58.074663 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zx8" event={"ID":"bc1a97cc-7db4-4c18-98bc-92f2d5e75030","Type":"ContainerStarted","Data":"88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529"} Nov 24 22:34:59 crc kubenswrapper[4767]: I1124 22:34:59.085729 4767 generic.go:334] "Generic (PLEG): container finished" podID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" containerID="88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529" exitCode=0 Nov 24 22:34:59 crc kubenswrapper[4767]: I1124 22:34:59.085819 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zx8" event={"ID":"bc1a97cc-7db4-4c18-98bc-92f2d5e75030","Type":"ContainerDied","Data":"88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529"} Nov 24 22:34:59 crc kubenswrapper[4767]: E1124 22:34:59.149530 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc1a97cc_7db4_4c18_98bc_92f2d5e75030.slice/crio-88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc1a97cc_7db4_4c18_98bc_92f2d5e75030.slice/crio-conmon-88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:35:00 crc kubenswrapper[4767]: I1124 22:35:00.096688 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zx8" event={"ID":"bc1a97cc-7db4-4c18-98bc-92f2d5e75030","Type":"ContainerStarted","Data":"8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682"} Nov 24 22:35:00 crc kubenswrapper[4767]: I1124 22:35:00.118637 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h4zx8" podStartSLOduration=3.696460033 podStartE2EDuration="6.11861737s" podCreationTimestamp="2025-11-24 22:34:54 +0000 UTC" firstStartedPulling="2025-11-24 22:34:57.066159878 +0000 UTC m=+3379.983143270" lastFinishedPulling="2025-11-24 22:34:59.488317235 +0000 UTC m=+3382.405300607" observedRunningTime="2025-11-24 22:35:00.113484214 +0000 UTC m=+3383.030467626" watchObservedRunningTime="2025-11-24 22:35:00.11861737 +0000 UTC m=+3383.035600742" Nov 24 22:35:02 crc kubenswrapper[4767]: I1124 22:35:02.640704 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:35:02 crc kubenswrapper[4767]: I1124 22:35:02.641047 4767 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:35:03 crc kubenswrapper[4767]: I1124 22:35:03.694056 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tch7x" podUID="cf981511-6313-4684-a2b9-adc784165a65" containerName="registry-server" probeResult="failure" output=< Nov 24 22:35:03 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 22:35:03 crc kubenswrapper[4767]: > Nov 24 22:35:05 crc kubenswrapper[4767]: I1124 22:35:05.043619 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:35:05 crc kubenswrapper[4767]: I1124 22:35:05.043730 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:35:05 crc kubenswrapper[4767]: I1124 22:35:05.123930 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:35:05 crc kubenswrapper[4767]: I1124 22:35:05.213601 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:35:05 crc kubenswrapper[4767]: I1124 22:35:05.377433 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4zx8"] Nov 24 22:35:05 crc kubenswrapper[4767]: I1124 22:35:05.481060 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:35:05 crc kubenswrapper[4767]: I1124 22:35:05.481132 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:35:05 crc kubenswrapper[4767]: I1124 22:35:05.481187 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 22:35:05 crc kubenswrapper[4767]: I1124 22:35:05.482007 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92645578fecb8c9e494395be1fe3d3037ad4ba9382efce850976d1579e6640b8"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:35:05 crc kubenswrapper[4767]: I1124 22:35:05.482070 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://92645578fecb8c9e494395be1fe3d3037ad4ba9382efce850976d1579e6640b8" gracePeriod=600 Nov 24 22:35:06 crc kubenswrapper[4767]: I1124 22:35:06.154572 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="92645578fecb8c9e494395be1fe3d3037ad4ba9382efce850976d1579e6640b8" exitCode=0 Nov 24 22:35:06 crc kubenswrapper[4767]: 
I1124 22:35:06.154773 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"92645578fecb8c9e494395be1fe3d3037ad4ba9382efce850976d1579e6640b8"} Nov 24 22:35:06 crc kubenswrapper[4767]: I1124 22:35:06.155212 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795"} Nov 24 22:35:06 crc kubenswrapper[4767]: I1124 22:35:06.155243 4767 scope.go:117] "RemoveContainer" containerID="25a3fa0ee6fa0902f6e96557794947729612ec6299b202c5acb43a46242b9fe7" Nov 24 22:35:07 crc kubenswrapper[4767]: I1124 22:35:07.167205 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h4zx8" podUID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" containerName="registry-server" containerID="cri-o://8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682" gracePeriod=2 Nov 24 22:35:07 crc kubenswrapper[4767]: I1124 22:35:07.700432 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:35:07 crc kubenswrapper[4767]: I1124 22:35:07.812559 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tmjh\" (UniqueName: \"kubernetes.io/projected/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-kube-api-access-7tmjh\") pod \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " Nov 24 22:35:07 crc kubenswrapper[4767]: I1124 22:35:07.812719 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-catalog-content\") pod \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " Nov 24 22:35:07 crc kubenswrapper[4767]: I1124 22:35:07.812821 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-utilities\") pod \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\" (UID: \"bc1a97cc-7db4-4c18-98bc-92f2d5e75030\") " Nov 24 22:35:07 crc kubenswrapper[4767]: I1124 22:35:07.813588 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-utilities" (OuterVolumeSpecName: "utilities") pod "bc1a97cc-7db4-4c18-98bc-92f2d5e75030" (UID: "bc1a97cc-7db4-4c18-98bc-92f2d5e75030"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:35:07 crc kubenswrapper[4767]: I1124 22:35:07.821303 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-kube-api-access-7tmjh" (OuterVolumeSpecName: "kube-api-access-7tmjh") pod "bc1a97cc-7db4-4c18-98bc-92f2d5e75030" (UID: "bc1a97cc-7db4-4c18-98bc-92f2d5e75030"). InnerVolumeSpecName "kube-api-access-7tmjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:35:07 crc kubenswrapper[4767]: I1124 22:35:07.860559 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc1a97cc-7db4-4c18-98bc-92f2d5e75030" (UID: "bc1a97cc-7db4-4c18-98bc-92f2d5e75030"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:35:07 crc kubenswrapper[4767]: I1124 22:35:07.915446 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tmjh\" (UniqueName: \"kubernetes.io/projected/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-kube-api-access-7tmjh\") on node \"crc\" DevicePath \"\"" Nov 24 22:35:07 crc kubenswrapper[4767]: I1124 22:35:07.915484 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:35:07 crc kubenswrapper[4767]: I1124 22:35:07.915494 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc1a97cc-7db4-4c18-98bc-92f2d5e75030-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.187431 4767 generic.go:334] "Generic (PLEG): container finished" podID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" containerID="8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682" exitCode=0 Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.187481 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zx8" event={"ID":"bc1a97cc-7db4-4c18-98bc-92f2d5e75030","Type":"ContainerDied","Data":"8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682"} Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.187515 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zx8" event={"ID":"bc1a97cc-7db4-4c18-98bc-92f2d5e75030","Type":"ContainerDied","Data":"9ee55f55c07369d4f529d5063eaceb8266f657f260fd163c74ec223ab5e52830"} Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.187539 4767 scope.go:117] "RemoveContainer" containerID="8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682" Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.187533 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4zx8" Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.222390 4767 scope.go:117] "RemoveContainer" containerID="88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529" Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.230885 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4zx8"] Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.243671 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h4zx8"] Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.255663 4767 scope.go:117] "RemoveContainer" containerID="59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9" Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.295995 4767 scope.go:117] "RemoveContainer" containerID="8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682" Nov 24 22:35:08 crc kubenswrapper[4767]: E1124 22:35:08.296990 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682\": container with ID starting with 8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682 not found: ID does not exist" containerID="8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682" Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.297021 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682"} err="failed to get container status \"8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682\": rpc error: code = NotFound desc = could not find container \"8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682\": container with ID starting with 8321928f4d4952156e971392c9c6756acf5f6ed65db9f75daa59060749c2c682 not found: ID does not exist" Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.297044 4767 scope.go:117] "RemoveContainer" containerID="88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529" Nov 24 22:35:08 crc kubenswrapper[4767]: E1124 22:35:08.297559 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529\": container with ID starting with 88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529 not found: ID does not exist" containerID="88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529" Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.297613 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529"} err="failed to get container status \"88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529\": rpc error: code = NotFound desc = could not find container \"88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529\": container with ID starting with 88fc9dbf2e55a72b87dba0007f4a85cb2a0ac6ef2b6ed4d6a0b9079b5f273529 not found: ID does not exist" Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.297646 4767 scope.go:117] "RemoveContainer" containerID="59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9" Nov 24 22:35:08 crc kubenswrapper[4767]: E1124 22:35:08.298029 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9\": container with ID starting with 59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9 not found: ID does not exist" containerID="59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9" Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.298078 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9"} err="failed to get container status \"59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9\": rpc error: code = NotFound desc = could not find container \"59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9\": container with ID starting with 59d5d583d29b84a3860e9f067fe987aeb067bdf665aad85ee7a64823177da4f9 not found: ID does not exist" Nov 24 22:35:08 crc kubenswrapper[4767]: I1124 22:35:08.329104 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" path="/var/lib/kubelet/pods/bc1a97cc-7db4-4c18-98bc-92f2d5e75030/volumes" Nov 24 22:35:13 crc kubenswrapper[4767]: I1124 22:35:13.710629 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tch7x" podUID="cf981511-6313-4684-a2b9-adc784165a65" containerName="registry-server" probeResult="failure" output=< Nov 24 22:35:13 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 22:35:13 crc kubenswrapper[4767]: > Nov 24 22:35:22 crc kubenswrapper[4767]: I1124 22:35:22.734136 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:35:22 crc kubenswrapper[4767]: I1124 22:35:22.798870 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:35:25 crc kubenswrapper[4767]: I1124 22:35:25.372357 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tch7x"] Nov 24 22:35:25 crc kubenswrapper[4767]: I1124 22:35:25.373042 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tch7x" podUID="cf981511-6313-4684-a2b9-adc784165a65" containerName="registry-server" containerID="cri-o://0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1" gracePeriod=2 Nov 24 22:35:25 crc kubenswrapper[4767]: I1124 22:35:25.880141 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:35:25 crc kubenswrapper[4767]: I1124 22:35:25.987403 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-catalog-content\") pod \"cf981511-6313-4684-a2b9-adc784165a65\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " Nov 24 22:35:25 crc kubenswrapper[4767]: I1124 22:35:25.987472 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn8vp\" (UniqueName: \"kubernetes.io/projected/cf981511-6313-4684-a2b9-adc784165a65-kube-api-access-zn8vp\") pod \"cf981511-6313-4684-a2b9-adc784165a65\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " Nov 24 22:35:25 crc kubenswrapper[4767]: I1124 22:35:25.987589 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-utilities\") pod \"cf981511-6313-4684-a2b9-adc784165a65\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " Nov 24 22:35:25 crc kubenswrapper[4767]: I1124 22:35:25.988532 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-utilities" (OuterVolumeSpecName: "utilities") pod "cf981511-6313-4684-a2b9-adc784165a65" (UID: "cf981511-6313-4684-a2b9-adc784165a65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:35:25 crc kubenswrapper[4767]: I1124 22:35:25.994161 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf981511-6313-4684-a2b9-adc784165a65-kube-api-access-zn8vp" (OuterVolumeSpecName: "kube-api-access-zn8vp") pod "cf981511-6313-4684-a2b9-adc784165a65" (UID: "cf981511-6313-4684-a2b9-adc784165a65"). InnerVolumeSpecName "kube-api-access-zn8vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.088457 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf981511-6313-4684-a2b9-adc784165a65" (UID: "cf981511-6313-4684-a2b9-adc784165a65"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.089056 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-catalog-content\") pod \"cf981511-6313-4684-a2b9-adc784165a65\" (UID: \"cf981511-6313-4684-a2b9-adc784165a65\") " Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.089493 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn8vp\" (UniqueName: \"kubernetes.io/projected/cf981511-6313-4684-a2b9-adc784165a65-kube-api-access-zn8vp\") on node \"crc\" DevicePath \"\"" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.089508 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:35:26 crc kubenswrapper[4767]: W1124 22:35:26.089561 4767 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/cf981511-6313-4684-a2b9-adc784165a65/volumes/kubernetes.io~empty-dir/catalog-content Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.089572 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf981511-6313-4684-a2b9-adc784165a65" (UID: "cf981511-6313-4684-a2b9-adc784165a65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.191948 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf981511-6313-4684-a2b9-adc784165a65-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.386488 4767 generic.go:334] "Generic (PLEG): container finished" podID="cf981511-6313-4684-a2b9-adc784165a65" containerID="0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1" exitCode=0 Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.386597 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tch7x" event={"ID":"cf981511-6313-4684-a2b9-adc784165a65","Type":"ContainerDied","Data":"0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1"} Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.386646 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tch7x" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.386707 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tch7x" event={"ID":"cf981511-6313-4684-a2b9-adc784165a65","Type":"ContainerDied","Data":"1662b695d7b9d560081bff5a4d1c06135a24d3076e554c83a62929b16d51daed"} Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.386763 4767 scope.go:117] "RemoveContainer" containerID="0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.421330 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tch7x"] Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.426379 4767 scope.go:117] "RemoveContainer" containerID="89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.440136 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tch7x"] Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.454696 4767 scope.go:117] "RemoveContainer" containerID="63d721bdb5514137db2b0927bc129a1f01e9a301fa7c509d1a972cb5a6c4cf14" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.517672 4767 scope.go:117] "RemoveContainer" containerID="0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1" Nov 24 22:35:26 crc kubenswrapper[4767]: E1124 22:35:26.518296 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1\": container with ID starting with 0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1 not found: ID does not exist" containerID="0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.518405 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1"} err="failed to get container status \"0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1\": rpc error: code = NotFound desc = could not find container \"0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1\": container with ID starting with 0d4c5332293d0661a959fc1dac665e80a14b474adbcc7889c764baf1f04ea7e1 not found: ID does not exist" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.518537 4767 scope.go:117] "RemoveContainer" containerID="89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30" Nov 24 22:35:26 crc kubenswrapper[4767]: E1124 22:35:26.519023 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30\": container with ID starting with 89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30 not found: ID does not exist" containerID="89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.519201 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30"} err="failed to get container status \"89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30\": rpc error: code = NotFound desc = could not find container 
\"89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30\": container with ID starting with 89c785d32113d91337cd2e0e8cabf0a614213d07bcdbb665eec9e1ee6c791b30 not found: ID does not exist" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.519309 4767 scope.go:117] "RemoveContainer" containerID="63d721bdb5514137db2b0927bc129a1f01e9a301fa7c509d1a972cb5a6c4cf14" Nov 24 22:35:26 crc kubenswrapper[4767]: E1124 22:35:26.519785 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63d721bdb5514137db2b0927bc129a1f01e9a301fa7c509d1a972cb5a6c4cf14\": container with ID starting with 63d721bdb5514137db2b0927bc129a1f01e9a301fa7c509d1a972cb5a6c4cf14 not found: ID does not exist" containerID="63d721bdb5514137db2b0927bc129a1f01e9a301fa7c509d1a972cb5a6c4cf14" Nov 24 22:35:26 crc kubenswrapper[4767]: I1124 22:35:26.519827 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63d721bdb5514137db2b0927bc129a1f01e9a301fa7c509d1a972cb5a6c4cf14"} err="failed to get container status \"63d721bdb5514137db2b0927bc129a1f01e9a301fa7c509d1a972cb5a6c4cf14\": rpc error: code = NotFound desc = could not find container \"63d721bdb5514137db2b0927bc129a1f01e9a301fa7c509d1a972cb5a6c4cf14\": container with ID starting with 63d721bdb5514137db2b0927bc129a1f01e9a301fa7c509d1a972cb5a6c4cf14 not found: ID does not exist" Nov 24 22:35:28 crc kubenswrapper[4767]: I1124 22:35:28.334067 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf981511-6313-4684-a2b9-adc784165a65" path="/var/lib/kubelet/pods/cf981511-6313-4684-a2b9-adc784165a65/volumes" Nov 24 22:37:05 crc kubenswrapper[4767]: I1124 22:37:05.481412 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:37:05 crc kubenswrapper[4767]: I1124 22:37:05.482166 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:37:35 crc kubenswrapper[4767]: I1124 22:37:35.481164 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:37:35 crc kubenswrapper[4767]: I1124 22:37:35.482336 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:38:05 crc kubenswrapper[4767]: I1124 22:38:05.481159 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:38:05 crc 
kubenswrapper[4767]: I1124 22:38:05.481811 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:38:05 crc kubenswrapper[4767]: I1124 22:38:05.481897 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 22:38:05 crc kubenswrapper[4767]: I1124 22:38:05.483099 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:38:05 crc kubenswrapper[4767]: I1124 22:38:05.483184 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" gracePeriod=600 Nov 24 22:38:05 crc kubenswrapper[4767]: E1124 22:38:05.606529 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:38:05 crc kubenswrapper[4767]: I1124 22:38:05.667868 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" exitCode=0 Nov 24 22:38:05 crc kubenswrapper[4767]: I1124 22:38:05.667947 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795"} Nov 24 22:38:05 crc kubenswrapper[4767]: I1124 22:38:05.667985 4767 scope.go:117] "RemoveContainer" containerID="92645578fecb8c9e494395be1fe3d3037ad4ba9382efce850976d1579e6640b8" Nov 24 22:38:05 crc kubenswrapper[4767]: I1124 22:38:05.676676 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:38:05 crc kubenswrapper[4767]: E1124 22:38:05.680775 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:38:20 crc kubenswrapper[4767]: I1124 22:38:20.315082 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:38:20 crc 
kubenswrapper[4767]: E1124 22:38:20.316086 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:38:32 crc kubenswrapper[4767]: I1124 22:38:32.313714 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:38:32 crc kubenswrapper[4767]: E1124 22:38:32.314912 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:38:44 crc kubenswrapper[4767]: I1124 22:38:44.314569 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:38:44 crc kubenswrapper[4767]: E1124 22:38:44.315698 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.019063 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s4hxs"] Nov 24 22:38:55 crc kubenswrapper[4767]: E1124 22:38:55.020783 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf981511-6313-4684-a2b9-adc784165a65" containerName="extract-utilities" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.020822 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf981511-6313-4684-a2b9-adc784165a65" containerName="extract-utilities" Nov 24 22:38:55 crc kubenswrapper[4767]: E1124 22:38:55.020888 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf981511-6313-4684-a2b9-adc784165a65" containerName="registry-server" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.020905 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf981511-6313-4684-a2b9-adc784165a65" containerName="registry-server" Nov 24 22:38:55 crc kubenswrapper[4767]: E1124 22:38:55.020918 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" containerName="registry-server" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.020929 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" containerName="registry-server" Nov 24 22:38:55 crc kubenswrapper[4767]: E1124 22:38:55.020954 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" containerName="extract-utilities" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.020965 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" 
containerName="extract-utilities" Nov 24 22:38:55 crc kubenswrapper[4767]: E1124 22:38:55.021000 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf981511-6313-4684-a2b9-adc784165a65" containerName="extract-content" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.021012 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf981511-6313-4684-a2b9-adc784165a65" containerName="extract-content" Nov 24 22:38:55 crc kubenswrapper[4767]: E1124 22:38:55.021040 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" containerName="extract-content" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.021051 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" containerName="extract-content" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.021406 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1a97cc-7db4-4c18-98bc-92f2d5e75030" containerName="registry-server" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.021445 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf981511-6313-4684-a2b9-adc784165a65" containerName="registry-server" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.023952 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.041633 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4hxs"] Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.179096 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-catalog-content\") pod \"community-operators-s4hxs\" (UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.179212 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bwpg\" (UniqueName: \"kubernetes.io/projected/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-kube-api-access-4bwpg\") pod \"community-operators-s4hxs\" (UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.179400 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-utilities\") pod \"community-operators-s4hxs\" (UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.281315 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-catalog-content\") pod \"community-operators-s4hxs\" (UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.281393 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bwpg\" (UniqueName: \"kubernetes.io/projected/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-kube-api-access-4bwpg\") pod \"community-operators-s4hxs\" 
(UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.281482 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-utilities\") pod \"community-operators-s4hxs\" (UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.282169 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-utilities\") pod \"community-operators-s4hxs\" (UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.282332 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-catalog-content\") pod \"community-operators-s4hxs\" (UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.307223 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bwpg\" (UniqueName: \"kubernetes.io/projected/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-kube-api-access-4bwpg\") pod \"community-operators-s4hxs\" (UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.355528 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:38:55 crc kubenswrapper[4767]: I1124 22:38:55.886216 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4hxs"] Nov 24 22:38:55 crc kubenswrapper[4767]: W1124 22:38:55.892103 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf577b9e7_ef2b_4e79_8f6b_8d2fecbecc1c.slice/crio-27ab157c65f320833278c698e02c9751c2f93ad64608dc68dc58769f1b520c42 WatchSource:0}: Error finding container 27ab157c65f320833278c698e02c9751c2f93ad64608dc68dc58769f1b520c42: Status 404 returned error can't find the container with id 27ab157c65f320833278c698e02c9751c2f93ad64608dc68dc58769f1b520c42 Nov 24 22:38:56 crc kubenswrapper[4767]: I1124 22:38:56.228397 4767 generic.go:334] "Generic (PLEG): container finished" podID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerID="cc8aa72ecdb39cae37cc03044a6b785acb9a8432e5b28362cdd0ea3da189c1ea" exitCode=0 Nov 24 22:38:56 crc kubenswrapper[4767]: I1124 22:38:56.228444 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4hxs" event={"ID":"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c","Type":"ContainerDied","Data":"cc8aa72ecdb39cae37cc03044a6b785acb9a8432e5b28362cdd0ea3da189c1ea"} Nov 24 22:38:56 crc kubenswrapper[4767]: I1124 22:38:56.228470 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4hxs" event={"ID":"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c","Type":"ContainerStarted","Data":"27ab157c65f320833278c698e02c9751c2f93ad64608dc68dc58769f1b520c42"} Nov 24 22:38:56 crc kubenswrapper[4767]: I1124 22:38:56.230816 4767 provider.go:102] 
Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:38:57 crc kubenswrapper[4767]: I1124 22:38:57.243532 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4hxs" event={"ID":"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c","Type":"ContainerStarted","Data":"5c5d8fc3e3e40ac1769d134b95f6101c4b2872c464d203f26b9564ebabc3ede1"} Nov 24 22:38:59 crc kubenswrapper[4767]: I1124 22:38:59.276874 4767 generic.go:334] "Generic (PLEG): container finished" podID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerID="5c5d8fc3e3e40ac1769d134b95f6101c4b2872c464d203f26b9564ebabc3ede1" exitCode=0 Nov 24 22:38:59 crc kubenswrapper[4767]: I1124 22:38:59.276960 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4hxs" event={"ID":"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c","Type":"ContainerDied","Data":"5c5d8fc3e3e40ac1769d134b95f6101c4b2872c464d203f26b9564ebabc3ede1"} Nov 24 22:38:59 crc kubenswrapper[4767]: I1124 22:38:59.314594 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:38:59 crc kubenswrapper[4767]: E1124 22:38:59.315127 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:39:00 crc kubenswrapper[4767]: I1124 22:39:00.291033 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4hxs" event={"ID":"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c","Type":"ContainerStarted","Data":"a526905d088b9046c4e962fe5a5c6e10db91e83f290a4545676505877bf51bb9"} Nov 24 22:39:00 crc kubenswrapper[4767]: I1124 22:39:00.318644 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s4hxs" podStartSLOduration=2.629499408 podStartE2EDuration="6.318617759s" podCreationTimestamp="2025-11-24 22:38:54 +0000 UTC" firstStartedPulling="2025-11-24 22:38:56.230296342 +0000 UTC m=+3619.147279754" lastFinishedPulling="2025-11-24 22:38:59.919414733 +0000 UTC m=+3622.836398105" observedRunningTime="2025-11-24 22:39:00.31545707 +0000 UTC m=+3623.232440482" watchObservedRunningTime="2025-11-24 22:39:00.318617759 +0000 UTC m=+3623.235601151" Nov 24 22:39:05 crc kubenswrapper[4767]: I1124 22:39:05.356607 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:39:05 crc kubenswrapper[4767]: I1124 22:39:05.357241 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:39:06 crc kubenswrapper[4767]: I1124 22:39:06.433114 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-s4hxs" podUID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerName="registry-server" probeResult="failure" output=< Nov 24 22:39:06 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 22:39:06 crc kubenswrapper[4767]: > Nov 24 22:39:13 crc kubenswrapper[4767]: I1124 22:39:13.313479 4767 scope.go:117] "RemoveContainer" 
containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:39:13 crc kubenswrapper[4767]: E1124 22:39:13.314307 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:39:15 crc kubenswrapper[4767]: I1124 22:39:15.423567 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:39:15 crc kubenswrapper[4767]: I1124 22:39:15.484123 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:39:15 crc kubenswrapper[4767]: I1124 22:39:15.665079 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4hxs"] Nov 24 22:39:16 crc kubenswrapper[4767]: I1124 22:39:16.454153 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s4hxs" podUID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerName="registry-server" containerID="cri-o://a526905d088b9046c4e962fe5a5c6e10db91e83f290a4545676505877bf51bb9" gracePeriod=2 Nov 24 22:39:17 crc kubenswrapper[4767]: I1124 22:39:17.471337 4767 generic.go:334] "Generic (PLEG): container finished" podID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerID="a526905d088b9046c4e962fe5a5c6e10db91e83f290a4545676505877bf51bb9" exitCode=0 Nov 24 22:39:17 crc kubenswrapper[4767]: I1124 22:39:17.471462 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4hxs" event={"ID":"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c","Type":"ContainerDied","Data":"a526905d088b9046c4e962fe5a5c6e10db91e83f290a4545676505877bf51bb9"} Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.067532 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.198399 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-catalog-content\") pod \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\" (UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.198494 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-utilities\") pod \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\" (UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.198553 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bwpg\" (UniqueName: \"kubernetes.io/projected/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-kube-api-access-4bwpg\") pod \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\" (UID: \"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c\") " Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.199306 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-utilities" (OuterVolumeSpecName: "utilities") pod "f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" (UID: "f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.205159 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-kube-api-access-4bwpg" (OuterVolumeSpecName: "kube-api-access-4bwpg") pod "f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" (UID: "f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c"). InnerVolumeSpecName "kube-api-access-4bwpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.250135 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" (UID: "f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.301824 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.301855 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.301871 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bwpg\" (UniqueName: \"kubernetes.io/projected/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c-kube-api-access-4bwpg\") on node \"crc\" DevicePath \"\"" Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.482134 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4hxs" event={"ID":"f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c","Type":"ContainerDied","Data":"27ab157c65f320833278c698e02c9751c2f93ad64608dc68dc58769f1b520c42"} Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.482198 4767 scope.go:117] "RemoveContainer" containerID="a526905d088b9046c4e962fe5a5c6e10db91e83f290a4545676505877bf51bb9" Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.482195 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4hxs" Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.508580 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4hxs"] Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.509735 4767 scope.go:117] "RemoveContainer" containerID="5c5d8fc3e3e40ac1769d134b95f6101c4b2872c464d203f26b9564ebabc3ede1" Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.515397 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s4hxs"] Nov 24 22:39:18 crc kubenswrapper[4767]: I1124 22:39:18.533898 4767 scope.go:117] "RemoveContainer" containerID="cc8aa72ecdb39cae37cc03044a6b785acb9a8432e5b28362cdd0ea3da189c1ea" Nov 24 22:39:20 crc kubenswrapper[4767]: I1124 22:39:20.334206 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" path="/var/lib/kubelet/pods/f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c/volumes" Nov 24 22:39:24 crc kubenswrapper[4767]: I1124 22:39:24.314357 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:39:24 crc kubenswrapper[4767]: E1124 22:39:24.314976 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:39:39 crc kubenswrapper[4767]: I1124 22:39:39.313403 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:39:39 crc kubenswrapper[4767]: E1124 22:39:39.314183 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:39:50 crc kubenswrapper[4767]: I1124 22:39:50.314377 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:39:50 crc kubenswrapper[4767]: E1124 22:39:50.315623 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:40:04 crc kubenswrapper[4767]: I1124 22:40:04.314524 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:40:04 crc kubenswrapper[4767]: E1124 22:40:04.315591 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:40:15 crc kubenswrapper[4767]: I1124 22:40:15.314356 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:40:15 crc kubenswrapper[4767]: E1124 22:40:15.315417 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:40:29 crc kubenswrapper[4767]: I1124 22:40:29.314462 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:40:29 crc kubenswrapper[4767]: E1124 22:40:29.315853 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:40:43 crc kubenswrapper[4767]: I1124 22:40:43.314097 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:40:43 crc kubenswrapper[4767]: E1124 22:40:43.315105 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:40:54 crc kubenswrapper[4767]: I1124 22:40:54.313571 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:40:54 crc kubenswrapper[4767]: E1124 22:40:54.314662 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:41:07 crc kubenswrapper[4767]: I1124 22:41:07.315254 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:41:07 crc kubenswrapper[4767]: E1124 22:41:07.316577 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:41:18 crc kubenswrapper[4767]: I1124 22:41:18.324315 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:41:18 crc kubenswrapper[4767]: E1124 22:41:18.325180 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:41:32 crc kubenswrapper[4767]: I1124 22:41:32.314609 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:41:32 crc kubenswrapper[4767]: E1124 22:41:32.315777 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:41:47 crc kubenswrapper[4767]: I1124 22:41:47.315492 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:41:47 crc kubenswrapper[4767]: E1124 22:41:47.316564 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" 
podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:41:55 crc kubenswrapper[4767]: I1124 22:41:55.988536 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xtvmf"] Nov 24 22:41:55 crc kubenswrapper[4767]: E1124 22:41:55.989747 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerName="extract-content" Nov 24 22:41:55 crc kubenswrapper[4767]: I1124 22:41:55.989770 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerName="extract-content" Nov 24 22:41:55 crc kubenswrapper[4767]: E1124 22:41:55.989805 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerName="registry-server" Nov 24 22:41:55 crc kubenswrapper[4767]: I1124 22:41:55.989818 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerName="registry-server" Nov 24 22:41:55 crc kubenswrapper[4767]: E1124 22:41:55.989855 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerName="extract-utilities" Nov 24 22:41:55 crc kubenswrapper[4767]: I1124 22:41:55.989868 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerName="extract-utilities" Nov 24 22:41:55 crc kubenswrapper[4767]: I1124 22:41:55.990223 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f577b9e7-ef2b-4e79-8f6b-8d2fecbecc1c" containerName="registry-server" Nov 24 22:41:55 crc kubenswrapper[4767]: I1124 22:41:55.992687 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.011250 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xtvmf"] Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.118124 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-catalog-content\") pod \"redhat-marketplace-xtvmf\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.118171 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfhm8\" (UniqueName: \"kubernetes.io/projected/9b7810e0-df5d-4798-98f1-844bfd83afe7-kube-api-access-zfhm8\") pod \"redhat-marketplace-xtvmf\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.118610 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-utilities\") pod \"redhat-marketplace-xtvmf\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.221171 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-utilities\") pod \"redhat-marketplace-xtvmf\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " 
pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.221256 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-catalog-content\") pod \"redhat-marketplace-xtvmf\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.221300 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfhm8\" (UniqueName: \"kubernetes.io/projected/9b7810e0-df5d-4798-98f1-844bfd83afe7-kube-api-access-zfhm8\") pod \"redhat-marketplace-xtvmf\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.221718 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-utilities\") pod \"redhat-marketplace-xtvmf\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.221983 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-catalog-content\") pod \"redhat-marketplace-xtvmf\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.244826 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfhm8\" (UniqueName: \"kubernetes.io/projected/9b7810e0-df5d-4798-98f1-844bfd83afe7-kube-api-access-zfhm8\") pod \"redhat-marketplace-xtvmf\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.328526 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:41:56 crc kubenswrapper[4767]: I1124 22:41:56.892878 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xtvmf"] Nov 24 22:41:56 crc kubenswrapper[4767]: W1124 22:41:56.901831 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b7810e0_df5d_4798_98f1_844bfd83afe7.slice/crio-2c0fb4e967b81da40719ca5a6d566956f694344793a99b00e5ed0dcfe617e01a WatchSource:0}: Error finding container 2c0fb4e967b81da40719ca5a6d566956f694344793a99b00e5ed0dcfe617e01a: Status 404 returned error can't find the container with id 2c0fb4e967b81da40719ca5a6d566956f694344793a99b00e5ed0dcfe617e01a Nov 24 22:41:57 crc kubenswrapper[4767]: I1124 22:41:57.413858 4767 generic.go:334] "Generic (PLEG): container finished" podID="9b7810e0-df5d-4798-98f1-844bfd83afe7" containerID="bd8811e4885f4c41d77b3e53bcd993c9029536f6f50fcc50a34814f80c18e603" exitCode=0 Nov 24 22:41:57 crc kubenswrapper[4767]: I1124 22:41:57.413933 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtvmf" event={"ID":"9b7810e0-df5d-4798-98f1-844bfd83afe7","Type":"ContainerDied","Data":"bd8811e4885f4c41d77b3e53bcd993c9029536f6f50fcc50a34814f80c18e603"} Nov 24 22:41:57 crc kubenswrapper[4767]: I1124 22:41:57.415117 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtvmf" event={"ID":"9b7810e0-df5d-4798-98f1-844bfd83afe7","Type":"ContainerStarted","Data":"2c0fb4e967b81da40719ca5a6d566956f694344793a99b00e5ed0dcfe617e01a"} Nov 24 22:41:58 crc kubenswrapper[4767]: I1124 22:41:58.426351 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtvmf" event={"ID":"9b7810e0-df5d-4798-98f1-844bfd83afe7","Type":"ContainerStarted","Data":"490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a"} Nov 24 22:41:59 crc kubenswrapper[4767]: I1124 22:41:59.313697 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:41:59 crc kubenswrapper[4767]: E1124 22:41:59.314438 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:41:59 crc kubenswrapper[4767]: I1124 22:41:59.442695 4767 generic.go:334] "Generic (PLEG): container finished" podID="9b7810e0-df5d-4798-98f1-844bfd83afe7" containerID="490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a" exitCode=0 Nov 24 22:41:59 crc kubenswrapper[4767]: I1124 22:41:59.442755 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtvmf" event={"ID":"9b7810e0-df5d-4798-98f1-844bfd83afe7","Type":"ContainerDied","Data":"490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a"} Nov 24 22:42:00 crc kubenswrapper[4767]: I1124 22:42:00.463171 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtvmf" 
event={"ID":"9b7810e0-df5d-4798-98f1-844bfd83afe7","Type":"ContainerStarted","Data":"3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118"} Nov 24 22:42:00 crc kubenswrapper[4767]: I1124 22:42:00.494707 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xtvmf" podStartSLOduration=2.995074657 podStartE2EDuration="5.494686039s" podCreationTimestamp="2025-11-24 22:41:55 +0000 UTC" firstStartedPulling="2025-11-24 22:41:57.416502266 +0000 UTC m=+3800.333485638" lastFinishedPulling="2025-11-24 22:41:59.916113608 +0000 UTC m=+3802.833097020" observedRunningTime="2025-11-24 22:42:00.492910829 +0000 UTC m=+3803.409894211" watchObservedRunningTime="2025-11-24 22:42:00.494686039 +0000 UTC m=+3803.411669411" Nov 24 22:42:06 crc kubenswrapper[4767]: I1124 22:42:06.330114 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:42:06 crc kubenswrapper[4767]: I1124 22:42:06.330594 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:42:06 crc kubenswrapper[4767]: I1124 22:42:06.404558 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:42:06 crc kubenswrapper[4767]: I1124 22:42:06.625678 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:42:06 crc kubenswrapper[4767]: I1124 22:42:06.707325 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xtvmf"] Nov 24 22:42:08 crc kubenswrapper[4767]: I1124 22:42:08.554792 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xtvmf" podUID="9b7810e0-df5d-4798-98f1-844bfd83afe7" containerName="registry-server" containerID="cri-o://3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118" gracePeriod=2 Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.054691 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.130573 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-catalog-content\") pod \"9b7810e0-df5d-4798-98f1-844bfd83afe7\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.130832 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-utilities\") pod \"9b7810e0-df5d-4798-98f1-844bfd83afe7\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.131862 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-utilities" (OuterVolumeSpecName: "utilities") pod "9b7810e0-df5d-4798-98f1-844bfd83afe7" (UID: "9b7810e0-df5d-4798-98f1-844bfd83afe7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.131893 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfhm8\" (UniqueName: \"kubernetes.io/projected/9b7810e0-df5d-4798-98f1-844bfd83afe7-kube-api-access-zfhm8\") pod \"9b7810e0-df5d-4798-98f1-844bfd83afe7\" (UID: \"9b7810e0-df5d-4798-98f1-844bfd83afe7\") " Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.133053 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.138717 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7810e0-df5d-4798-98f1-844bfd83afe7-kube-api-access-zfhm8" (OuterVolumeSpecName: "kube-api-access-zfhm8") pod "9b7810e0-df5d-4798-98f1-844bfd83afe7" (UID: "9b7810e0-df5d-4798-98f1-844bfd83afe7"). InnerVolumeSpecName "kube-api-access-zfhm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.148751 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b7810e0-df5d-4798-98f1-844bfd83afe7" (UID: "9b7810e0-df5d-4798-98f1-844bfd83afe7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.235416 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7810e0-df5d-4798-98f1-844bfd83afe7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.235482 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfhm8\" (UniqueName: \"kubernetes.io/projected/9b7810e0-df5d-4798-98f1-844bfd83afe7-kube-api-access-zfhm8\") on node \"crc\" DevicePath \"\"" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.570312 4767 generic.go:334] "Generic (PLEG): container finished" podID="9b7810e0-df5d-4798-98f1-844bfd83afe7" containerID="3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118" exitCode=0 Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.570399 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtvmf" event={"ID":"9b7810e0-df5d-4798-98f1-844bfd83afe7","Type":"ContainerDied","Data":"3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118"} Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.570478 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtvmf" event={"ID":"9b7810e0-df5d-4798-98f1-844bfd83afe7","Type":"ContainerDied","Data":"2c0fb4e967b81da40719ca5a6d566956f694344793a99b00e5ed0dcfe617e01a"} Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.570506 4767 scope.go:117] "RemoveContainer" containerID="3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.570540 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xtvmf" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.606511 4767 scope.go:117] "RemoveContainer" containerID="490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.648572 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xtvmf"] Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.652655 4767 scope.go:117] "RemoveContainer" containerID="bd8811e4885f4c41d77b3e53bcd993c9029536f6f50fcc50a34814f80c18e603" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.664967 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xtvmf"] Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.697549 4767 scope.go:117] "RemoveContainer" containerID="3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118" Nov 24 22:42:09 crc kubenswrapper[4767]: E1124 22:42:09.698030 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118\": container with ID starting with 3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118 not found: ID does not exist" containerID="3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.698064 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118"} err="failed to get container status \"3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118\": rpc error: code = NotFound desc = could not find container \"3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118\": container with ID starting with 3cbada9dfde42222b6e4008c4a9961eaebfc6d7a60db62594929f24427b83118 not found: ID does not exist" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.698087 4767 scope.go:117] "RemoveContainer" containerID="490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a" Nov 24 22:42:09 crc kubenswrapper[4767]: E1124 22:42:09.698568 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a\": container with ID starting with 490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a not found: ID does not exist" containerID="490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.698592 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a"} err="failed to get container status \"490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a\": rpc error: code = NotFound desc = could not find container \"490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a\": container with ID starting with 490422684dae762ae2cc47a85eda36508caf4b1f7f4f60e76f6dd39c8aeaf47a not found: ID does not exist" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.698605 4767 scope.go:117] "RemoveContainer" containerID="bd8811e4885f4c41d77b3e53bcd993c9029536f6f50fcc50a34814f80c18e603" Nov 24 22:42:09 crc kubenswrapper[4767]: E1124 22:42:09.699099 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bd8811e4885f4c41d77b3e53bcd993c9029536f6f50fcc50a34814f80c18e603\": container with ID starting with bd8811e4885f4c41d77b3e53bcd993c9029536f6f50fcc50a34814f80c18e603 not found: ID does not exist" containerID="bd8811e4885f4c41d77b3e53bcd993c9029536f6f50fcc50a34814f80c18e603" Nov 24 22:42:09 crc kubenswrapper[4767]: I1124 22:42:09.699165 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd8811e4885f4c41d77b3e53bcd993c9029536f6f50fcc50a34814f80c18e603"} err="failed to get container status \"bd8811e4885f4c41d77b3e53bcd993c9029536f6f50fcc50a34814f80c18e603\": rpc error: code = NotFound desc = could not find container \"bd8811e4885f4c41d77b3e53bcd993c9029536f6f50fcc50a34814f80c18e603\": container with ID starting with bd8811e4885f4c41d77b3e53bcd993c9029536f6f50fcc50a34814f80c18e603 not found: ID does not exist" Nov 24 22:42:10 crc kubenswrapper[4767]: I1124 22:42:10.327676 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b7810e0-df5d-4798-98f1-844bfd83afe7" path="/var/lib/kubelet/pods/9b7810e0-df5d-4798-98f1-844bfd83afe7/volumes" Nov 24 22:42:14 crc kubenswrapper[4767]: I1124 22:42:14.313981 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:42:14 crc kubenswrapper[4767]: E1124 22:42:14.314746 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:42:25 crc kubenswrapper[4767]: I1124 22:42:25.312923 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:42:25 crc kubenswrapper[4767]: E1124 22:42:25.313769 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:42:37 crc kubenswrapper[4767]: I1124 22:42:37.313850 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:42:37 crc kubenswrapper[4767]: E1124 22:42:37.314966 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:42:50 crc kubenswrapper[4767]: I1124 22:42:50.314095 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:42:50 crc kubenswrapper[4767]: E1124 22:42:50.316184 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:43:03 crc kubenswrapper[4767]: I1124 22:43:03.313189 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:43:03 crc kubenswrapper[4767]: E1124 22:43:03.314306 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:43:15 crc kubenswrapper[4767]: I1124 22:43:15.313611 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:43:16 crc kubenswrapper[4767]: I1124 22:43:16.334588 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"2e8b085ae95296df8e0f660f5fcb933ab370c6b3e404e71573accde8147048f9"} Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.157020 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm"] Nov 24 22:45:00 crc kubenswrapper[4767]: E1124 22:45:00.158616 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7810e0-df5d-4798-98f1-844bfd83afe7" containerName="extract-utilities" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.158651 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7810e0-df5d-4798-98f1-844bfd83afe7" containerName="extract-utilities" Nov 24 22:45:00 crc kubenswrapper[4767]: E1124 22:45:00.158728 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7810e0-df5d-4798-98f1-844bfd83afe7" containerName="registry-server" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.158747 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7810e0-df5d-4798-98f1-844bfd83afe7" containerName="registry-server" Nov 24 22:45:00 crc kubenswrapper[4767]: E1124 22:45:00.158783 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7810e0-df5d-4798-98f1-844bfd83afe7" containerName="extract-content" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.158801 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7810e0-df5d-4798-98f1-844bfd83afe7" containerName="extract-content" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.159340 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7810e0-df5d-4798-98f1-844bfd83afe7" containerName="registry-server" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.160818 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.163930 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.164334 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.169993 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm"] Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.285848 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60baf41d-8aa3-4b07-a344-a3357d37ca4d-secret-volume\") pod \"collect-profiles-29400405-xh5jm\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.286074 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89889\" (UniqueName: \"kubernetes.io/projected/60baf41d-8aa3-4b07-a344-a3357d37ca4d-kube-api-access-89889\") pod \"collect-profiles-29400405-xh5jm\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.286161 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60baf41d-8aa3-4b07-a344-a3357d37ca4d-config-volume\") pod \"collect-profiles-29400405-xh5jm\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.389014 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60baf41d-8aa3-4b07-a344-a3357d37ca4d-secret-volume\") pod \"collect-profiles-29400405-xh5jm\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.389554 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89889\" (UniqueName: \"kubernetes.io/projected/60baf41d-8aa3-4b07-a344-a3357d37ca4d-kube-api-access-89889\") pod \"collect-profiles-29400405-xh5jm\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.389659 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60baf41d-8aa3-4b07-a344-a3357d37ca4d-config-volume\") pod \"collect-profiles-29400405-xh5jm\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.391408 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60baf41d-8aa3-4b07-a344-a3357d37ca4d-config-volume\") pod 
\"collect-profiles-29400405-xh5jm\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.394701 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60baf41d-8aa3-4b07-a344-a3357d37ca4d-secret-volume\") pod \"collect-profiles-29400405-xh5jm\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.410171 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89889\" (UniqueName: \"kubernetes.io/projected/60baf41d-8aa3-4b07-a344-a3357d37ca4d-kube-api-access-89889\") pod \"collect-profiles-29400405-xh5jm\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.484825 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:00 crc kubenswrapper[4767]: I1124 22:45:00.952313 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm"] Nov 24 22:45:01 crc kubenswrapper[4767]: I1124 22:45:01.580037 4767 generic.go:334] "Generic (PLEG): container finished" podID="60baf41d-8aa3-4b07-a344-a3357d37ca4d" containerID="111b8465fb1b39b8f767442e11e334c4b3d3f6b52cfddf6b5ca9a679bf9201a7" exitCode=0 Nov 24 22:45:01 crc kubenswrapper[4767]: I1124 22:45:01.580403 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" event={"ID":"60baf41d-8aa3-4b07-a344-a3357d37ca4d","Type":"ContainerDied","Data":"111b8465fb1b39b8f767442e11e334c4b3d3f6b52cfddf6b5ca9a679bf9201a7"} Nov 24 22:45:01 crc kubenswrapper[4767]: I1124 22:45:01.580434 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" event={"ID":"60baf41d-8aa3-4b07-a344-a3357d37ca4d","Type":"ContainerStarted","Data":"4811551c62946b282b435652c0d691a1402293378b55e68fe35db392d05854ce"} Nov 24 22:45:02 crc kubenswrapper[4767]: I1124 22:45:02.960144 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.051494 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89889\" (UniqueName: \"kubernetes.io/projected/60baf41d-8aa3-4b07-a344-a3357d37ca4d-kube-api-access-89889\") pod \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.051753 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60baf41d-8aa3-4b07-a344-a3357d37ca4d-config-volume\") pod \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.051905 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60baf41d-8aa3-4b07-a344-a3357d37ca4d-secret-volume\") pod \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\" (UID: \"60baf41d-8aa3-4b07-a344-a3357d37ca4d\") " Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.052463 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60baf41d-8aa3-4b07-a344-a3357d37ca4d-config-volume" (OuterVolumeSpecName: "config-volume") pod "60baf41d-8aa3-4b07-a344-a3357d37ca4d" (UID: "60baf41d-8aa3-4b07-a344-a3357d37ca4d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.052792 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60baf41d-8aa3-4b07-a344-a3357d37ca4d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.059219 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60baf41d-8aa3-4b07-a344-a3357d37ca4d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "60baf41d-8aa3-4b07-a344-a3357d37ca4d" (UID: "60baf41d-8aa3-4b07-a344-a3357d37ca4d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.059887 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60baf41d-8aa3-4b07-a344-a3357d37ca4d-kube-api-access-89889" (OuterVolumeSpecName: "kube-api-access-89889") pod "60baf41d-8aa3-4b07-a344-a3357d37ca4d" (UID: "60baf41d-8aa3-4b07-a344-a3357d37ca4d"). InnerVolumeSpecName "kube-api-access-89889". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.154610 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89889\" (UniqueName: \"kubernetes.io/projected/60baf41d-8aa3-4b07-a344-a3357d37ca4d-kube-api-access-89889\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.154668 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60baf41d-8aa3-4b07-a344-a3357d37ca4d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.598487 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" event={"ID":"60baf41d-8aa3-4b07-a344-a3357d37ca4d","Type":"ContainerDied","Data":"4811551c62946b282b435652c0d691a1402293378b55e68fe35db392d05854ce"} Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.598529 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4811551c62946b282b435652c0d691a1402293378b55e68fe35db392d05854ce" Nov 24 22:45:03 crc kubenswrapper[4767]: I1124 22:45:03.598543 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm" Nov 24 22:45:04 crc kubenswrapper[4767]: I1124 22:45:04.071801 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78"] Nov 24 22:45:04 crc kubenswrapper[4767]: I1124 22:45:04.088940 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-nhn78"] Nov 24 22:45:04 crc kubenswrapper[4767]: I1124 22:45:04.336614 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96" path="/var/lib/kubelet/pods/bb3e6da3-cf34-4cd1-ab99-c5d4eb025c96/volumes" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.640848 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kv89k"] Nov 24 22:45:27 crc kubenswrapper[4767]: E1124 22:45:27.642111 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60baf41d-8aa3-4b07-a344-a3357d37ca4d" containerName="collect-profiles" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.642134 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="60baf41d-8aa3-4b07-a344-a3357d37ca4d" containerName="collect-profiles" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.642594 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="60baf41d-8aa3-4b07-a344-a3357d37ca4d" containerName="collect-profiles" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.645157 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.664992 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kv89k"] Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.795242 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-catalog-content\") pod \"certified-operators-kv89k\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.795396 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57k2g\" (UniqueName: \"kubernetes.io/projected/e3522834-349f-4fe4-b693-e00786768403-kube-api-access-57k2g\") pod \"certified-operators-kv89k\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.795559 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-utilities\") pod \"certified-operators-kv89k\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.897893 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57k2g\" (UniqueName: \"kubernetes.io/projected/e3522834-349f-4fe4-b693-e00786768403-kube-api-access-57k2g\") pod \"certified-operators-kv89k\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.897931 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-catalog-content\") pod \"certified-operators-kv89k\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.897968 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-utilities\") pod \"certified-operators-kv89k\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.898521 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-utilities\") pod \"certified-operators-kv89k\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.898694 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-catalog-content\") pod \"certified-operators-kv89k\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.917701 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-57k2g\" (UniqueName: \"kubernetes.io/projected/e3522834-349f-4fe4-b693-e00786768403-kube-api-access-57k2g\") pod \"certified-operators-kv89k\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:27 crc kubenswrapper[4767]: I1124 22:45:27.990174 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:28 crc kubenswrapper[4767]: I1124 22:45:28.490376 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kv89k"] Nov 24 22:45:28 crc kubenswrapper[4767]: I1124 22:45:28.866547 4767 generic.go:334] "Generic (PLEG): container finished" podID="e3522834-349f-4fe4-b693-e00786768403" containerID="0dcf4f063de653b1a817d8f66b8b2bacc521d0b419ad79832bfc385750a12650" exitCode=0 Nov 24 22:45:28 crc kubenswrapper[4767]: I1124 22:45:28.866691 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kv89k" event={"ID":"e3522834-349f-4fe4-b693-e00786768403","Type":"ContainerDied","Data":"0dcf4f063de653b1a817d8f66b8b2bacc521d0b419ad79832bfc385750a12650"} Nov 24 22:45:28 crc kubenswrapper[4767]: I1124 22:45:28.866948 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kv89k" event={"ID":"e3522834-349f-4fe4-b693-e00786768403","Type":"ContainerStarted","Data":"f18f171464393026f36fe570abdbc401e1af1844a87517c48b16d17c6678d2bd"} Nov 24 22:45:28 crc kubenswrapper[4767]: I1124 22:45:28.868638 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:45:30 crc kubenswrapper[4767]: I1124 22:45:30.892742 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kv89k" event={"ID":"e3522834-349f-4fe4-b693-e00786768403","Type":"ContainerStarted","Data":"165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce"} Nov 24 22:45:31 crc kubenswrapper[4767]: I1124 22:45:31.905913 4767 generic.go:334] "Generic (PLEG): container finished" podID="e3522834-349f-4fe4-b693-e00786768403" containerID="165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce" exitCode=0 Nov 24 22:45:31 crc kubenswrapper[4767]: I1124 22:45:31.905964 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kv89k" event={"ID":"e3522834-349f-4fe4-b693-e00786768403","Type":"ContainerDied","Data":"165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce"} Nov 24 22:45:32 crc kubenswrapper[4767]: I1124 22:45:32.918429 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kv89k" event={"ID":"e3522834-349f-4fe4-b693-e00786768403","Type":"ContainerStarted","Data":"6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88"} Nov 24 22:45:32 crc kubenswrapper[4767]: I1124 22:45:32.936755 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kv89k" podStartSLOduration=2.47428493 podStartE2EDuration="5.936735997s" podCreationTimestamp="2025-11-24 22:45:27 +0000 UTC" firstStartedPulling="2025-11-24 22:45:28.868435486 +0000 UTC m=+4011.785418858" lastFinishedPulling="2025-11-24 22:45:32.330886513 +0000 UTC m=+4015.247869925" observedRunningTime="2025-11-24 22:45:32.936652645 +0000 UTC m=+4015.853636037" watchObservedRunningTime="2025-11-24 
22:45:32.936735997 +0000 UTC m=+4015.853719359" Nov 24 22:45:35 crc kubenswrapper[4767]: I1124 22:45:35.481521 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:45:35 crc kubenswrapper[4767]: I1124 22:45:35.481911 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:45:37 crc kubenswrapper[4767]: I1124 22:45:37.991081 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:37 crc kubenswrapper[4767]: I1124 22:45:37.991739 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:38 crc kubenswrapper[4767]: I1124 22:45:38.067219 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:39 crc kubenswrapper[4767]: I1124 22:45:39.064088 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:39 crc kubenswrapper[4767]: I1124 22:45:39.126394 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kv89k"] Nov 24 22:45:41 crc kubenswrapper[4767]: I1124 22:45:41.002109 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kv89k" podUID="e3522834-349f-4fe4-b693-e00786768403" containerName="registry-server" containerID="cri-o://6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88" gracePeriod=2 Nov 24 22:45:41 crc kubenswrapper[4767]: I1124 22:45:41.575979 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:41 crc kubenswrapper[4767]: I1124 22:45:41.693818 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-utilities\") pod \"e3522834-349f-4fe4-b693-e00786768403\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " Nov 24 22:45:41 crc kubenswrapper[4767]: I1124 22:45:41.693899 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57k2g\" (UniqueName: \"kubernetes.io/projected/e3522834-349f-4fe4-b693-e00786768403-kube-api-access-57k2g\") pod \"e3522834-349f-4fe4-b693-e00786768403\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " Nov 24 22:45:41 crc kubenswrapper[4767]: I1124 22:45:41.693936 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-catalog-content\") pod \"e3522834-349f-4fe4-b693-e00786768403\" (UID: \"e3522834-349f-4fe4-b693-e00786768403\") " Nov 24 22:45:41 crc kubenswrapper[4767]: I1124 22:45:41.695227 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-utilities" (OuterVolumeSpecName: "utilities") pod "e3522834-349f-4fe4-b693-e00786768403" (UID: "e3522834-349f-4fe4-b693-e00786768403"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:45:41 crc kubenswrapper[4767]: I1124 22:45:41.702750 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3522834-349f-4fe4-b693-e00786768403-kube-api-access-57k2g" (OuterVolumeSpecName: "kube-api-access-57k2g") pod "e3522834-349f-4fe4-b693-e00786768403" (UID: "e3522834-349f-4fe4-b693-e00786768403"). InnerVolumeSpecName "kube-api-access-57k2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:45:41 crc kubenswrapper[4767]: I1124 22:45:41.735568 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3522834-349f-4fe4-b693-e00786768403" (UID: "e3522834-349f-4fe4-b693-e00786768403"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:45:41 crc kubenswrapper[4767]: I1124 22:45:41.796845 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:41 crc kubenswrapper[4767]: I1124 22:45:41.796900 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57k2g\" (UniqueName: \"kubernetes.io/projected/e3522834-349f-4fe4-b693-e00786768403-kube-api-access-57k2g\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:41 crc kubenswrapper[4767]: I1124 22:45:41.796923 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3522834-349f-4fe4-b693-e00786768403-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.016617 4767 generic.go:334] "Generic (PLEG): container finished" podID="e3522834-349f-4fe4-b693-e00786768403" containerID="6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88" exitCode=0 Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.016675 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kv89k" event={"ID":"e3522834-349f-4fe4-b693-e00786768403","Type":"ContainerDied","Data":"6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88"} Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.016696 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kv89k" Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.016714 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kv89k" event={"ID":"e3522834-349f-4fe4-b693-e00786768403","Type":"ContainerDied","Data":"f18f171464393026f36fe570abdbc401e1af1844a87517c48b16d17c6678d2bd"} Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.016745 4767 scope.go:117] "RemoveContainer" containerID="6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88" Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.048154 4767 scope.go:117] "RemoveContainer" containerID="165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce" Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.068539 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kv89k"] Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.084137 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kv89k"] Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.086323 4767 scope.go:117] "RemoveContainer" containerID="0dcf4f063de653b1a817d8f66b8b2bacc521d0b419ad79832bfc385750a12650" Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.155536 4767 scope.go:117] "RemoveContainer" containerID="6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88" Nov 24 22:45:42 crc kubenswrapper[4767]: E1124 22:45:42.156493 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88\": container with ID starting with 6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88 not found: ID does not exist" containerID="6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88" Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.156539 
4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88"} err="failed to get container status \"6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88\": rpc error: code = NotFound desc = could not find container \"6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88\": container with ID starting with 6473f924fdfa66fc6a2adf6d439877ddd36778c5cbc6adf7efa4cc5017ca7d88 not found: ID does not exist" Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.156572 4767 scope.go:117] "RemoveContainer" containerID="165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce" Nov 24 22:45:42 crc kubenswrapper[4767]: E1124 22:45:42.157012 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce\": container with ID starting with 165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce not found: ID does not exist" containerID="165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce" Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.157064 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce"} err="failed to get container status \"165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce\": rpc error: code = NotFound desc = could not find container \"165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce\": container with ID starting with 165e0e0fc6051c08ea5cf9cfc558150a36d3667d1210c4c2c5fe4715afe975ce not found: ID does not exist" Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.157091 4767 scope.go:117] "RemoveContainer" containerID="0dcf4f063de653b1a817d8f66b8b2bacc521d0b419ad79832bfc385750a12650" Nov 24 22:45:42 crc kubenswrapper[4767]: E1124 22:45:42.157415 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dcf4f063de653b1a817d8f66b8b2bacc521d0b419ad79832bfc385750a12650\": container with ID starting with 0dcf4f063de653b1a817d8f66b8b2bacc521d0b419ad79832bfc385750a12650 not found: ID does not exist" containerID="0dcf4f063de653b1a817d8f66b8b2bacc521d0b419ad79832bfc385750a12650" Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.157460 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dcf4f063de653b1a817d8f66b8b2bacc521d0b419ad79832bfc385750a12650"} err="failed to get container status \"0dcf4f063de653b1a817d8f66b8b2bacc521d0b419ad79832bfc385750a12650\": rpc error: code = NotFound desc = could not find container \"0dcf4f063de653b1a817d8f66b8b2bacc521d0b419ad79832bfc385750a12650\": container with ID starting with 0dcf4f063de653b1a817d8f66b8b2bacc521d0b419ad79832bfc385750a12650 not found: ID does not exist" Nov 24 22:45:42 crc kubenswrapper[4767]: I1124 22:45:42.329533 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3522834-349f-4fe4-b693-e00786768403" path="/var/lib/kubelet/pods/e3522834-349f-4fe4-b693-e00786768403/volumes" Nov 24 22:45:44 crc kubenswrapper[4767]: I1124 22:45:44.397824 4767 scope.go:117] "RemoveContainer" containerID="6f9cc36295d186a8cb966db9448f2738ec51701eb1944b987ed77f8b282cd72c" Nov 24 22:46:05 crc kubenswrapper[4767]: I1124 22:46:05.481432 4767 patch_prober.go:28] interesting 
pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:46:05 crc kubenswrapper[4767]: I1124 22:46:05.482251 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:46:35 crc kubenswrapper[4767]: I1124 22:46:35.481842 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:46:35 crc kubenswrapper[4767]: I1124 22:46:35.482441 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:46:35 crc kubenswrapper[4767]: I1124 22:46:35.482491 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 22:46:35 crc kubenswrapper[4767]: I1124 22:46:35.483367 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e8b085ae95296df8e0f660f5fcb933ab370c6b3e404e71573accde8147048f9"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:46:35 crc kubenswrapper[4767]: I1124 22:46:35.483432 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://2e8b085ae95296df8e0f660f5fcb933ab370c6b3e404e71573accde8147048f9" gracePeriod=600 Nov 24 22:46:36 crc kubenswrapper[4767]: I1124 22:46:36.651143 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="2e8b085ae95296df8e0f660f5fcb933ab370c6b3e404e71573accde8147048f9" exitCode=0 Nov 24 22:46:36 crc kubenswrapper[4767]: I1124 22:46:36.651208 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"2e8b085ae95296df8e0f660f5fcb933ab370c6b3e404e71573accde8147048f9"} Nov 24 22:46:36 crc kubenswrapper[4767]: I1124 22:46:36.651921 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e"} Nov 24 22:46:36 crc kubenswrapper[4767]: I1124 22:46:36.651950 4767 scope.go:117] "RemoveContainer" containerID="33ef40fc87b32e2fccf1e67a46be0baf8693e5f54c4954e48810bbd4d39fb795" Nov 24 22:48:48 
crc kubenswrapper[4767]: I1124 22:48:48.516327 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xb9jn"] Nov 24 22:48:48 crc kubenswrapper[4767]: E1124 22:48:48.518804 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3522834-349f-4fe4-b693-e00786768403" containerName="extract-content" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.518922 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3522834-349f-4fe4-b693-e00786768403" containerName="extract-content" Nov 24 22:48:48 crc kubenswrapper[4767]: E1124 22:48:48.519024 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3522834-349f-4fe4-b693-e00786768403" containerName="extract-utilities" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.519094 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3522834-349f-4fe4-b693-e00786768403" containerName="extract-utilities" Nov 24 22:48:48 crc kubenswrapper[4767]: E1124 22:48:48.519259 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3522834-349f-4fe4-b693-e00786768403" containerName="registry-server" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.519353 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3522834-349f-4fe4-b693-e00786768403" containerName="registry-server" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.520052 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3522834-349f-4fe4-b693-e00786768403" containerName="registry-server" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.525280 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.542853 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xb9jn"] Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.547819 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzhs6\" (UniqueName: \"kubernetes.io/projected/4d144566-29bc-4466-bf7e-02b62c328adf-kube-api-access-dzhs6\") pod \"redhat-operators-xb9jn\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.547893 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-catalog-content\") pod \"redhat-operators-xb9jn\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.548108 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-utilities\") pod \"redhat-operators-xb9jn\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.649997 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-catalog-content\") pod \"redhat-operators-xb9jn\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:48 crc 
kubenswrapper[4767]: I1124 22:48:48.650085 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-utilities\") pod \"redhat-operators-xb9jn\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.650190 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzhs6\" (UniqueName: \"kubernetes.io/projected/4d144566-29bc-4466-bf7e-02b62c328adf-kube-api-access-dzhs6\") pod \"redhat-operators-xb9jn\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.650561 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-catalog-content\") pod \"redhat-operators-xb9jn\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.650637 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-utilities\") pod \"redhat-operators-xb9jn\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.670387 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzhs6\" (UniqueName: \"kubernetes.io/projected/4d144566-29bc-4466-bf7e-02b62c328adf-kube-api-access-dzhs6\") pod \"redhat-operators-xb9jn\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:48 crc kubenswrapper[4767]: I1124 22:48:48.861161 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:49 crc kubenswrapper[4767]: I1124 22:48:49.746607 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xb9jn"] Nov 24 22:48:50 crc kubenswrapper[4767]: I1124 22:48:50.210216 4767 generic.go:334] "Generic (PLEG): container finished" podID="4d144566-29bc-4466-bf7e-02b62c328adf" containerID="0ded9df6f26ab9014534637741f37022a15bca53917491fbc446bf6ac9387362" exitCode=0 Nov 24 22:48:50 crc kubenswrapper[4767]: I1124 22:48:50.210355 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb9jn" event={"ID":"4d144566-29bc-4466-bf7e-02b62c328adf","Type":"ContainerDied","Data":"0ded9df6f26ab9014534637741f37022a15bca53917491fbc446bf6ac9387362"} Nov 24 22:48:50 crc kubenswrapper[4767]: I1124 22:48:50.210591 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb9jn" event={"ID":"4d144566-29bc-4466-bf7e-02b62c328adf","Type":"ContainerStarted","Data":"a263c0977bb03ce73523fc481878aa79f8a97d448ec768dc5ecb71265b7bbc15"} Nov 24 22:48:52 crc kubenswrapper[4767]: I1124 22:48:52.237492 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb9jn" event={"ID":"4d144566-29bc-4466-bf7e-02b62c328adf","Type":"ContainerStarted","Data":"b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800"} Nov 24 22:48:53 crc kubenswrapper[4767]: I1124 22:48:53.267308 4767 generic.go:334] "Generic (PLEG): container finished" podID="4d144566-29bc-4466-bf7e-02b62c328adf" containerID="b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800" exitCode=0 Nov 24 22:48:53 crc kubenswrapper[4767]: I1124 22:48:53.267407 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb9jn" event={"ID":"4d144566-29bc-4466-bf7e-02b62c328adf","Type":"ContainerDied","Data":"b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800"} Nov 24 22:48:54 crc kubenswrapper[4767]: I1124 22:48:54.280597 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb9jn" event={"ID":"4d144566-29bc-4466-bf7e-02b62c328adf","Type":"ContainerStarted","Data":"7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8"} Nov 24 22:48:54 crc kubenswrapper[4767]: I1124 22:48:54.313694 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xb9jn" podStartSLOduration=2.780235266 podStartE2EDuration="6.313677712s" podCreationTimestamp="2025-11-24 22:48:48 +0000 UTC" firstStartedPulling="2025-11-24 22:48:50.212140921 +0000 UTC m=+4213.129124333" lastFinishedPulling="2025-11-24 22:48:53.745583397 +0000 UTC m=+4216.662566779" observedRunningTime="2025-11-24 22:48:54.303441832 +0000 UTC m=+4217.220425214" watchObservedRunningTime="2025-11-24 22:48:54.313677712 +0000 UTC m=+4217.230661074" Nov 24 22:48:58 crc kubenswrapper[4767]: I1124 22:48:58.861536 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:58 crc kubenswrapper[4767]: I1124 22:48:58.862214 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:48:59 crc kubenswrapper[4767]: I1124 22:48:59.953430 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xb9jn" 
podUID="4d144566-29bc-4466-bf7e-02b62c328adf" containerName="registry-server" probeResult="failure" output=< Nov 24 22:48:59 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 22:48:59 crc kubenswrapper[4767]: > Nov 24 22:49:05 crc kubenswrapper[4767]: I1124 22:49:05.481470 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:49:05 crc kubenswrapper[4767]: I1124 22:49:05.481948 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:49:08 crc kubenswrapper[4767]: I1124 22:49:08.921182 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:49:08 crc kubenswrapper[4767]: I1124 22:49:08.977176 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:49:09 crc kubenswrapper[4767]: I1124 22:49:09.159459 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xb9jn"] Nov 24 22:49:10 crc kubenswrapper[4767]: I1124 22:49:10.448235 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xb9jn" podUID="4d144566-29bc-4466-bf7e-02b62c328adf" containerName="registry-server" containerID="cri-o://7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8" gracePeriod=2 Nov 24 22:49:10 crc kubenswrapper[4767]: I1124 22:49:10.993518 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.133634 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-utilities\") pod \"4d144566-29bc-4466-bf7e-02b62c328adf\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.133737 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-catalog-content\") pod \"4d144566-29bc-4466-bf7e-02b62c328adf\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.133886 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzhs6\" (UniqueName: \"kubernetes.io/projected/4d144566-29bc-4466-bf7e-02b62c328adf-kube-api-access-dzhs6\") pod \"4d144566-29bc-4466-bf7e-02b62c328adf\" (UID: \"4d144566-29bc-4466-bf7e-02b62c328adf\") " Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.134545 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-utilities" (OuterVolumeSpecName: "utilities") pod "4d144566-29bc-4466-bf7e-02b62c328adf" (UID: "4d144566-29bc-4466-bf7e-02b62c328adf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.143452 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d144566-29bc-4466-bf7e-02b62c328adf-kube-api-access-dzhs6" (OuterVolumeSpecName: "kube-api-access-dzhs6") pod "4d144566-29bc-4466-bf7e-02b62c328adf" (UID: "4d144566-29bc-4466-bf7e-02b62c328adf"). InnerVolumeSpecName "kube-api-access-dzhs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.237432 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.237481 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzhs6\" (UniqueName: \"kubernetes.io/projected/4d144566-29bc-4466-bf7e-02b62c328adf-kube-api-access-dzhs6\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.255833 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d144566-29bc-4466-bf7e-02b62c328adf" (UID: "4d144566-29bc-4466-bf7e-02b62c328adf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.339771 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d144566-29bc-4466-bf7e-02b62c328adf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.465773 4767 generic.go:334] "Generic (PLEG): container finished" podID="4d144566-29bc-4466-bf7e-02b62c328adf" containerID="7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8" exitCode=0 Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.465835 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb9jn" event={"ID":"4d144566-29bc-4466-bf7e-02b62c328adf","Type":"ContainerDied","Data":"7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8"} Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.465891 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xb9jn" event={"ID":"4d144566-29bc-4466-bf7e-02b62c328adf","Type":"ContainerDied","Data":"a263c0977bb03ce73523fc481878aa79f8a97d448ec768dc5ecb71265b7bbc15"} Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.465911 4767 scope.go:117] "RemoveContainer" containerID="7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8" Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.466709 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xb9jn" Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.508115 4767 scope.go:117] "RemoveContainer" containerID="b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800" Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.509816 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xb9jn"] Nov 24 22:49:11 crc kubenswrapper[4767]: I1124 22:49:11.520366 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xb9jn"] Nov 24 22:49:12 crc kubenswrapper[4767]: I1124 22:49:12.114523 4767 scope.go:117] "RemoveContainer" containerID="0ded9df6f26ab9014534637741f37022a15bca53917491fbc446bf6ac9387362" Nov 24 22:49:12 crc kubenswrapper[4767]: I1124 22:49:12.193567 4767 scope.go:117] "RemoveContainer" containerID="7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8" Nov 24 22:49:12 crc kubenswrapper[4767]: E1124 22:49:12.194092 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8\": container with ID starting with 7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8 not found: ID does not exist" containerID="7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8" Nov 24 22:49:12 crc kubenswrapper[4767]: I1124 22:49:12.194142 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8"} err="failed to get container status \"7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8\": rpc error: code = NotFound desc = could not find container \"7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8\": container with ID starting with 7528701ab5e3cc8bed50c1e04a57c337dc0fc78f77cdef2c0cec0e47e655a4f8 not found: ID does not exist" Nov 24 22:49:12 crc kubenswrapper[4767]: I1124 22:49:12.194168 4767 scope.go:117] "RemoveContainer" containerID="b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800" Nov 24 22:49:12 crc kubenswrapper[4767]: E1124 22:49:12.194606 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800\": container with ID starting with b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800 not found: ID does not exist" containerID="b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800" Nov 24 22:49:12 crc kubenswrapper[4767]: I1124 22:49:12.194632 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800"} err="failed to get container status \"b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800\": rpc error: code = NotFound desc = could not find container \"b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800\": container with ID starting with b0f79935cf1fba932a5a057710368bef87c67d788937cdac0247b681060fc800 not found: ID does not exist" Nov 24 22:49:12 crc kubenswrapper[4767]: I1124 22:49:12.194647 4767 scope.go:117] "RemoveContainer" containerID="0ded9df6f26ab9014534637741f37022a15bca53917491fbc446bf6ac9387362" Nov 24 22:49:12 crc kubenswrapper[4767]: E1124 22:49:12.195224 4767 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"0ded9df6f26ab9014534637741f37022a15bca53917491fbc446bf6ac9387362\": container with ID starting with 0ded9df6f26ab9014534637741f37022a15bca53917491fbc446bf6ac9387362 not found: ID does not exist" containerID="0ded9df6f26ab9014534637741f37022a15bca53917491fbc446bf6ac9387362" Nov 24 22:49:12 crc kubenswrapper[4767]: I1124 22:49:12.195248 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ded9df6f26ab9014534637741f37022a15bca53917491fbc446bf6ac9387362"} err="failed to get container status \"0ded9df6f26ab9014534637741f37022a15bca53917491fbc446bf6ac9387362\": rpc error: code = NotFound desc = could not find container \"0ded9df6f26ab9014534637741f37022a15bca53917491fbc446bf6ac9387362\": container with ID starting with 0ded9df6f26ab9014534637741f37022a15bca53917491fbc446bf6ac9387362 not found: ID does not exist" Nov 24 22:49:12 crc kubenswrapper[4767]: I1124 22:49:12.327221 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d144566-29bc-4466-bf7e-02b62c328adf" path="/var/lib/kubelet/pods/4d144566-29bc-4466-bf7e-02b62c328adf/volumes" Nov 24 22:49:20 crc kubenswrapper[4767]: I1124 22:49:20.807728 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qw5vn"] Nov 24 22:49:20 crc kubenswrapper[4767]: E1124 22:49:20.808548 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d144566-29bc-4466-bf7e-02b62c328adf" containerName="extract-content" Nov 24 22:49:20 crc kubenswrapper[4767]: I1124 22:49:20.808562 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d144566-29bc-4466-bf7e-02b62c328adf" containerName="extract-content" Nov 24 22:49:20 crc kubenswrapper[4767]: E1124 22:49:20.808581 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d144566-29bc-4466-bf7e-02b62c328adf" containerName="extract-utilities" Nov 24 22:49:20 crc kubenswrapper[4767]: I1124 22:49:20.808588 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d144566-29bc-4466-bf7e-02b62c328adf" containerName="extract-utilities" Nov 24 22:49:20 crc kubenswrapper[4767]: E1124 22:49:20.808623 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d144566-29bc-4466-bf7e-02b62c328adf" containerName="registry-server" Nov 24 22:49:20 crc kubenswrapper[4767]: I1124 22:49:20.808630 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d144566-29bc-4466-bf7e-02b62c328adf" containerName="registry-server" Nov 24 22:49:20 crc kubenswrapper[4767]: I1124 22:49:20.808824 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d144566-29bc-4466-bf7e-02b62c328adf" containerName="registry-server" Nov 24 22:49:20 crc kubenswrapper[4767]: I1124 22:49:20.810329 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:20 crc kubenswrapper[4767]: I1124 22:49:20.816866 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qw5vn"] Nov 24 22:49:20 crc kubenswrapper[4767]: I1124 22:49:20.913482 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-catalog-content\") pod \"community-operators-qw5vn\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:20 crc kubenswrapper[4767]: I1124 22:49:20.913571 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-utilities\") pod \"community-operators-qw5vn\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:20 crc kubenswrapper[4767]: I1124 22:49:20.913614 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8wb6\" (UniqueName: \"kubernetes.io/projected/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-kube-api-access-d8wb6\") pod \"community-operators-qw5vn\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:21 crc kubenswrapper[4767]: I1124 22:49:21.015774 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8wb6\" (UniqueName: \"kubernetes.io/projected/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-kube-api-access-d8wb6\") pod \"community-operators-qw5vn\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:21 crc kubenswrapper[4767]: I1124 22:49:21.016189 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-catalog-content\") pod \"community-operators-qw5vn\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:21 crc kubenswrapper[4767]: I1124 22:49:21.016381 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-utilities\") pod \"community-operators-qw5vn\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:21 crc kubenswrapper[4767]: I1124 22:49:21.016689 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-catalog-content\") pod \"community-operators-qw5vn\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:21 crc kubenswrapper[4767]: I1124 22:49:21.016800 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-utilities\") pod \"community-operators-qw5vn\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:21 crc kubenswrapper[4767]: I1124 22:49:21.047570 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d8wb6\" (UniqueName: \"kubernetes.io/projected/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-kube-api-access-d8wb6\") pod \"community-operators-qw5vn\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:21 crc kubenswrapper[4767]: I1124 22:49:21.145003 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:21 crc kubenswrapper[4767]: I1124 22:49:21.657643 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qw5vn"] Nov 24 22:49:21 crc kubenswrapper[4767]: I1124 22:49:21.799262 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw5vn" event={"ID":"062d3f47-a8b3-45b3-9c84-aa6a691e11bd","Type":"ContainerStarted","Data":"b3bf00097b11cfcbc8b98330fdc56a2f15a46e9c1512e21346c14e9817de0462"} Nov 24 22:49:22 crc kubenswrapper[4767]: I1124 22:49:22.813610 4767 generic.go:334] "Generic (PLEG): container finished" podID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" containerID="654fe349a93e718c1cacde7822392fc1cb7f58225340aada36c19ec3677471b6" exitCode=0 Nov 24 22:49:22 crc kubenswrapper[4767]: I1124 22:49:22.813682 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw5vn" event={"ID":"062d3f47-a8b3-45b3-9c84-aa6a691e11bd","Type":"ContainerDied","Data":"654fe349a93e718c1cacde7822392fc1cb7f58225340aada36c19ec3677471b6"} Nov 24 22:49:23 crc kubenswrapper[4767]: I1124 22:49:23.839136 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw5vn" event={"ID":"062d3f47-a8b3-45b3-9c84-aa6a691e11bd","Type":"ContainerStarted","Data":"de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d"} Nov 24 22:49:24 crc kubenswrapper[4767]: I1124 22:49:24.848675 4767 generic.go:334] "Generic (PLEG): container finished" podID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" containerID="de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d" exitCode=0 Nov 24 22:49:24 crc kubenswrapper[4767]: I1124 22:49:24.848872 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw5vn" event={"ID":"062d3f47-a8b3-45b3-9c84-aa6a691e11bd","Type":"ContainerDied","Data":"de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d"} Nov 24 22:49:25 crc kubenswrapper[4767]: I1124 22:49:25.862246 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw5vn" event={"ID":"062d3f47-a8b3-45b3-9c84-aa6a691e11bd","Type":"ContainerStarted","Data":"c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b"} Nov 24 22:49:25 crc kubenswrapper[4767]: I1124 22:49:25.886645 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qw5vn" podStartSLOduration=3.43816362 podStartE2EDuration="5.886621444s" podCreationTimestamp="2025-11-24 22:49:20 +0000 UTC" firstStartedPulling="2025-11-24 22:49:22.815740527 +0000 UTC m=+4245.732723899" lastFinishedPulling="2025-11-24 22:49:25.264198341 +0000 UTC m=+4248.181181723" observedRunningTime="2025-11-24 22:49:25.881569701 +0000 UTC m=+4248.798553083" watchObservedRunningTime="2025-11-24 22:49:25.886621444 +0000 UTC m=+4248.803604866" Nov 24 22:49:31 crc kubenswrapper[4767]: I1124 22:49:31.146178 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:31 crc kubenswrapper[4767]: I1124 22:49:31.146846 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:31 crc kubenswrapper[4767]: I1124 22:49:31.266808 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:31 crc kubenswrapper[4767]: I1124 22:49:31.995459 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:32 crc kubenswrapper[4767]: I1124 22:49:32.053996 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qw5vn"] Nov 24 22:49:33 crc kubenswrapper[4767]: I1124 22:49:33.940665 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qw5vn" podUID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" containerName="registry-server" containerID="cri-o://c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b" gracePeriod=2 Nov 24 22:49:34 crc kubenswrapper[4767]: E1124 22:49:34.101683 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod062d3f47_a8b3_45b3_9c84_aa6a691e11bd.slice/crio-conmon-c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod062d3f47_a8b3_45b3_9c84_aa6a691e11bd.slice/crio-c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.509865 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.597574 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-catalog-content\") pod \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.597698 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-utilities\") pod \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.597756 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8wb6\" (UniqueName: \"kubernetes.io/projected/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-kube-api-access-d8wb6\") pod \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\" (UID: \"062d3f47-a8b3-45b3-9c84-aa6a691e11bd\") " Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.599677 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-utilities" (OuterVolumeSpecName: "utilities") pod "062d3f47-a8b3-45b3-9c84-aa6a691e11bd" (UID: "062d3f47-a8b3-45b3-9c84-aa6a691e11bd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.606544 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-kube-api-access-d8wb6" (OuterVolumeSpecName: "kube-api-access-d8wb6") pod "062d3f47-a8b3-45b3-9c84-aa6a691e11bd" (UID: "062d3f47-a8b3-45b3-9c84-aa6a691e11bd"). InnerVolumeSpecName "kube-api-access-d8wb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.667621 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "062d3f47-a8b3-45b3-9c84-aa6a691e11bd" (UID: "062d3f47-a8b3-45b3-9c84-aa6a691e11bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.700492 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.700536 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.700547 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8wb6\" (UniqueName: \"kubernetes.io/projected/062d3f47-a8b3-45b3-9c84-aa6a691e11bd-kube-api-access-d8wb6\") on node \"crc\" DevicePath \"\"" Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.952424 4767 generic.go:334] "Generic (PLEG): container finished" podID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" containerID="c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b" exitCode=0 Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.952472 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw5vn" event={"ID":"062d3f47-a8b3-45b3-9c84-aa6a691e11bd","Type":"ContainerDied","Data":"c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b"} Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.952482 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qw5vn" Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.952502 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw5vn" event={"ID":"062d3f47-a8b3-45b3-9c84-aa6a691e11bd","Type":"ContainerDied","Data":"b3bf00097b11cfcbc8b98330fdc56a2f15a46e9c1512e21346c14e9817de0462"} Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.952527 4767 scope.go:117] "RemoveContainer" containerID="c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b" Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.986653 4767 scope.go:117] "RemoveContainer" containerID="de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d" Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.988204 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qw5vn"] Nov 24 22:49:34 crc kubenswrapper[4767]: I1124 22:49:34.996573 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qw5vn"] Nov 24 22:49:35 crc kubenswrapper[4767]: I1124 22:49:35.009515 4767 scope.go:117] "RemoveContainer" containerID="654fe349a93e718c1cacde7822392fc1cb7f58225340aada36c19ec3677471b6" Nov 24 22:49:35 crc kubenswrapper[4767]: I1124 22:49:35.056034 4767 scope.go:117] "RemoveContainer" containerID="c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b" Nov 24 22:49:35 crc kubenswrapper[4767]: E1124 22:49:35.056571 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b\": container with ID starting with c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b not found: ID does not exist" containerID="c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b" Nov 24 22:49:35 crc kubenswrapper[4767]: I1124 22:49:35.056612 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b"} err="failed to get container status \"c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b\": rpc error: code = NotFound desc = could not find container \"c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b\": container with ID starting with c3c35b23bc300fabaa277434a9409b57e0690be349236aef48a76b4e65980b8b not found: ID does not exist" Nov 24 22:49:35 crc kubenswrapper[4767]: I1124 22:49:35.056640 4767 scope.go:117] "RemoveContainer" containerID="de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d" Nov 24 22:49:35 crc kubenswrapper[4767]: E1124 22:49:35.057068 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d\": container with ID starting with de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d not found: ID does not exist" containerID="de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d" Nov 24 22:49:35 crc kubenswrapper[4767]: I1124 22:49:35.059347 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d"} err="failed to get container status \"de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d\": rpc error: code = NotFound desc = could not find 
container \"de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d\": container with ID starting with de3c19d15b9f08bc092930658601c155ead7538d5d12776997efede1203c695d not found: ID does not exist" Nov 24 22:49:35 crc kubenswrapper[4767]: I1124 22:49:35.059395 4767 scope.go:117] "RemoveContainer" containerID="654fe349a93e718c1cacde7822392fc1cb7f58225340aada36c19ec3677471b6" Nov 24 22:49:35 crc kubenswrapper[4767]: E1124 22:49:35.065502 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654fe349a93e718c1cacde7822392fc1cb7f58225340aada36c19ec3677471b6\": container with ID starting with 654fe349a93e718c1cacde7822392fc1cb7f58225340aada36c19ec3677471b6 not found: ID does not exist" containerID="654fe349a93e718c1cacde7822392fc1cb7f58225340aada36c19ec3677471b6" Nov 24 22:49:35 crc kubenswrapper[4767]: I1124 22:49:35.065546 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654fe349a93e718c1cacde7822392fc1cb7f58225340aada36c19ec3677471b6"} err="failed to get container status \"654fe349a93e718c1cacde7822392fc1cb7f58225340aada36c19ec3677471b6\": rpc error: code = NotFound desc = could not find container \"654fe349a93e718c1cacde7822392fc1cb7f58225340aada36c19ec3677471b6\": container with ID starting with 654fe349a93e718c1cacde7822392fc1cb7f58225340aada36c19ec3677471b6 not found: ID does not exist" Nov 24 22:49:35 crc kubenswrapper[4767]: I1124 22:49:35.481717 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:49:35 crc kubenswrapper[4767]: I1124 22:49:35.482115 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:49:36 crc kubenswrapper[4767]: I1124 22:49:36.341154 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" path="/var/lib/kubelet/pods/062d3f47-a8b3-45b3-9c84-aa6a691e11bd/volumes" Nov 24 22:50:05 crc kubenswrapper[4767]: I1124 22:50:05.482231 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:50:05 crc kubenswrapper[4767]: I1124 22:50:05.483048 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:50:05 crc kubenswrapper[4767]: I1124 22:50:05.483118 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 22:50:05 crc kubenswrapper[4767]: I1124 22:50:05.484447 4767 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:50:05 crc kubenswrapper[4767]: I1124 22:50:05.484561 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" gracePeriod=600 Nov 24 22:50:05 crc kubenswrapper[4767]: E1124 22:50:05.618071 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:50:06 crc kubenswrapper[4767]: I1124 22:50:06.321035 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" exitCode=0 Nov 24 22:50:06 crc kubenswrapper[4767]: I1124 22:50:06.326552 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e"} Nov 24 22:50:06 crc kubenswrapper[4767]: I1124 22:50:06.326618 4767 scope.go:117] "RemoveContainer" containerID="2e8b085ae95296df8e0f660f5fcb933ab370c6b3e404e71573accde8147048f9" Nov 24 22:50:06 crc kubenswrapper[4767]: I1124 22:50:06.327520 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:50:06 crc kubenswrapper[4767]: E1124 22:50:06.328061 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:50:21 crc kubenswrapper[4767]: I1124 22:50:21.314773 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:50:21 crc kubenswrapper[4767]: E1124 22:50:21.315887 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:50:32 crc kubenswrapper[4767]: I1124 22:50:32.314704 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:50:32 crc kubenswrapper[4767]: E1124 22:50:32.315922 4767 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:50:47 crc kubenswrapper[4767]: I1124 22:50:47.314027 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:50:47 crc kubenswrapper[4767]: E1124 22:50:47.315177 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:51:00 crc kubenswrapper[4767]: I1124 22:51:00.313885 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:51:00 crc kubenswrapper[4767]: E1124 22:51:00.314852 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:51:14 crc kubenswrapper[4767]: I1124 22:51:14.314890 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:51:14 crc kubenswrapper[4767]: E1124 22:51:14.316360 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:51:28 crc kubenswrapper[4767]: I1124 22:51:28.322604 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:51:28 crc kubenswrapper[4767]: E1124 22:51:28.323489 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:51:40 crc kubenswrapper[4767]: I1124 22:51:40.313578 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:51:40 crc kubenswrapper[4767]: E1124 22:51:40.314560 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:51:53 crc kubenswrapper[4767]: I1124 22:51:53.313789 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:51:53 crc kubenswrapper[4767]: E1124 22:51:53.315908 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:52:06 crc kubenswrapper[4767]: I1124 22:52:06.313772 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:52:06 crc kubenswrapper[4767]: E1124 22:52:06.314699 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:52:21 crc kubenswrapper[4767]: I1124 22:52:21.314621 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:52:21 crc kubenswrapper[4767]: E1124 22:52:21.316353 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:52:36 crc kubenswrapper[4767]: I1124 22:52:36.313387 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:52:36 crc kubenswrapper[4767]: E1124 22:52:36.315150 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.224413 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lh67g"] Nov 24 22:52:45 crc kubenswrapper[4767]: E1124 22:52:45.225412 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" containerName="registry-server" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.225426 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" containerName="registry-server" Nov 24 22:52:45 crc kubenswrapper[4767]: E1124 
22:52:45.225438 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" containerName="extract-content" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.225445 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" containerName="extract-content" Nov 24 22:52:45 crc kubenswrapper[4767]: E1124 22:52:45.225458 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" containerName="extract-utilities" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.225465 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" containerName="extract-utilities" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.225650 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="062d3f47-a8b3-45b3-9c84-aa6a691e11bd" containerName="registry-server" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.227204 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.240756 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lh67g"] Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.326085 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrb7m\" (UniqueName: \"kubernetes.io/projected/632dc288-b4e2-4879-ac73-07e30dc91469-kube-api-access-nrb7m\") pod \"redhat-marketplace-lh67g\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.326171 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-utilities\") pod \"redhat-marketplace-lh67g\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.326239 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-catalog-content\") pod \"redhat-marketplace-lh67g\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.427493 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrb7m\" (UniqueName: \"kubernetes.io/projected/632dc288-b4e2-4879-ac73-07e30dc91469-kube-api-access-nrb7m\") pod \"redhat-marketplace-lh67g\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.427848 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-utilities\") pod \"redhat-marketplace-lh67g\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.427925 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-catalog-content\") pod \"redhat-marketplace-lh67g\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.428396 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-utilities\") pod \"redhat-marketplace-lh67g\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.428453 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-catalog-content\") pod \"redhat-marketplace-lh67g\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.453595 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrb7m\" (UniqueName: \"kubernetes.io/projected/632dc288-b4e2-4879-ac73-07e30dc91469-kube-api-access-nrb7m\") pod \"redhat-marketplace-lh67g\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:45 crc kubenswrapper[4767]: I1124 22:52:45.561328 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:46 crc kubenswrapper[4767]: I1124 22:52:46.108033 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lh67g"] Nov 24 22:52:46 crc kubenswrapper[4767]: I1124 22:52:46.206967 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh67g" event={"ID":"632dc288-b4e2-4879-ac73-07e30dc91469","Type":"ContainerStarted","Data":"a0393eedf15a7d89790e0ebff902926c8a0b787cbc71d68faa775a99a57e4cff"} Nov 24 22:52:47 crc kubenswrapper[4767]: I1124 22:52:47.221450 4767 generic.go:334] "Generic (PLEG): container finished" podID="632dc288-b4e2-4879-ac73-07e30dc91469" containerID="92cd590cba3ba68de377f8de86fda55e638bcb4490a2aecd2a6aa1c18f691b41" exitCode=0 Nov 24 22:52:47 crc kubenswrapper[4767]: I1124 22:52:47.221586 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh67g" event={"ID":"632dc288-b4e2-4879-ac73-07e30dc91469","Type":"ContainerDied","Data":"92cd590cba3ba68de377f8de86fda55e638bcb4490a2aecd2a6aa1c18f691b41"} Nov 24 22:52:47 crc kubenswrapper[4767]: I1124 22:52:47.225081 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:52:47 crc kubenswrapper[4767]: I1124 22:52:47.314765 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:52:47 crc kubenswrapper[4767]: E1124 22:52:47.315208 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:52:48 crc kubenswrapper[4767]: I1124 22:52:48.237602 4767 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh67g" event={"ID":"632dc288-b4e2-4879-ac73-07e30dc91469","Type":"ContainerStarted","Data":"82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31"} Nov 24 22:52:49 crc kubenswrapper[4767]: I1124 22:52:49.275195 4767 generic.go:334] "Generic (PLEG): container finished" podID="632dc288-b4e2-4879-ac73-07e30dc91469" containerID="82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31" exitCode=0 Nov 24 22:52:49 crc kubenswrapper[4767]: I1124 22:52:49.275245 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh67g" event={"ID":"632dc288-b4e2-4879-ac73-07e30dc91469","Type":"ContainerDied","Data":"82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31"} Nov 24 22:52:50 crc kubenswrapper[4767]: I1124 22:52:50.289604 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh67g" event={"ID":"632dc288-b4e2-4879-ac73-07e30dc91469","Type":"ContainerStarted","Data":"c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682"} Nov 24 22:52:50 crc kubenswrapper[4767]: I1124 22:52:50.314553 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lh67g" podStartSLOduration=2.845147085 podStartE2EDuration="5.314530171s" podCreationTimestamp="2025-11-24 22:52:45 +0000 UTC" firstStartedPulling="2025-11-24 22:52:47.224503525 +0000 UTC m=+4450.141486947" lastFinishedPulling="2025-11-24 22:52:49.693886661 +0000 UTC m=+4452.610870033" observedRunningTime="2025-11-24 22:52:50.311158096 +0000 UTC m=+4453.228141478" watchObservedRunningTime="2025-11-24 22:52:50.314530171 +0000 UTC m=+4453.231513573" Nov 24 22:52:55 crc kubenswrapper[4767]: I1124 22:52:55.562066 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:55 crc kubenswrapper[4767]: I1124 22:52:55.562851 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:55 crc kubenswrapper[4767]: I1124 22:52:55.635965 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:56 crc kubenswrapper[4767]: I1124 22:52:56.444431 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:56 crc kubenswrapper[4767]: I1124 22:52:56.521013 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lh67g"] Nov 24 22:52:58 crc kubenswrapper[4767]: I1124 22:52:58.321016 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:52:58 crc kubenswrapper[4767]: E1124 22:52:58.321586 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:52:58 crc kubenswrapper[4767]: I1124 22:52:58.381673 4767 kuberuntime_container.go:808] "Killing container with a grace period" 
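
The pod_startup_latency_tracker entry above decomposes startup time: the E2E duration is the watch-observed running time minus the creation timestamp (22:52:50.314530171 − 22:52:45 ≈ 5.3145s), and the SLO duration subtracts the image-pull window (lastFinishedPulling − firstStartedPulling ≈ 2.4694s), leaving ≈ 2.8451s. A stdlib-only Go check of that arithmetic, with the timestamps copied from the entry; the exact decomposition is an inference from the printed fields:

    package main

    import (
    	"fmt"
    	"time"
    )

    func ts(s string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := ts("2025-11-24 22:52:45 +0000 UTC")
    	pullStart := ts("2025-11-24 22:52:47.224503525 +0000 UTC")
    	pullEnd := ts("2025-11-24 22:52:49.693886661 +0000 UTC")
    	running := ts("2025-11-24 22:52:50.314530171 +0000 UTC")

    	e2e := running.Sub(created)
    	pull := pullEnd.Sub(pullStart)
    	fmt.Println("e2e:", e2e)      // ≈ the podStartE2EDuration above
    	fmt.Println("slo:", e2e-pull) // ≈ the podStartSLOduration above
    }
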
pod="openshift-marketplace/redhat-marketplace-lh67g" podUID="632dc288-b4e2-4879-ac73-07e30dc91469" containerName="registry-server" containerID="cri-o://c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682" gracePeriod=2 Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.117851 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.230021 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-catalog-content\") pod \"632dc288-b4e2-4879-ac73-07e30dc91469\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.230226 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-utilities\") pod \"632dc288-b4e2-4879-ac73-07e30dc91469\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.230258 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrb7m\" (UniqueName: \"kubernetes.io/projected/632dc288-b4e2-4879-ac73-07e30dc91469-kube-api-access-nrb7m\") pod \"632dc288-b4e2-4879-ac73-07e30dc91469\" (UID: \"632dc288-b4e2-4879-ac73-07e30dc91469\") " Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.231552 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-utilities" (OuterVolumeSpecName: "utilities") pod "632dc288-b4e2-4879-ac73-07e30dc91469" (UID: "632dc288-b4e2-4879-ac73-07e30dc91469"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.238068 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632dc288-b4e2-4879-ac73-07e30dc91469-kube-api-access-nrb7m" (OuterVolumeSpecName: "kube-api-access-nrb7m") pod "632dc288-b4e2-4879-ac73-07e30dc91469" (UID: "632dc288-b4e2-4879-ac73-07e30dc91469"). InnerVolumeSpecName "kube-api-access-nrb7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.251064 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "632dc288-b4e2-4879-ac73-07e30dc91469" (UID: "632dc288-b4e2-4879-ac73-07e30dc91469"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.333133 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.334087 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/632dc288-b4e2-4879-ac73-07e30dc91469-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.334142 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrb7m\" (UniqueName: \"kubernetes.io/projected/632dc288-b4e2-4879-ac73-07e30dc91469-kube-api-access-nrb7m\") on node \"crc\" DevicePath \"\"" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.395882 4767 generic.go:334] "Generic (PLEG): container finished" podID="632dc288-b4e2-4879-ac73-07e30dc91469" containerID="c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682" exitCode=0 Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.395977 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh67g" event={"ID":"632dc288-b4e2-4879-ac73-07e30dc91469","Type":"ContainerDied","Data":"c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682"} Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.396264 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lh67g" event={"ID":"632dc288-b4e2-4879-ac73-07e30dc91469","Type":"ContainerDied","Data":"a0393eedf15a7d89790e0ebff902926c8a0b787cbc71d68faa775a99a57e4cff"} Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.396047 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lh67g" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.396314 4767 scope.go:117] "RemoveContainer" containerID="c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.430686 4767 scope.go:117] "RemoveContainer" containerID="82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.447971 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lh67g"] Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.460365 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lh67g"] Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.471185 4767 scope.go:117] "RemoveContainer" containerID="92cd590cba3ba68de377f8de86fda55e638bcb4490a2aecd2a6aa1c18f691b41" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.536332 4767 scope.go:117] "RemoveContainer" containerID="c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682" Nov 24 22:52:59 crc kubenswrapper[4767]: E1124 22:52:59.537009 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682\": container with ID starting with c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682 not found: ID does not exist" containerID="c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.537073 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682"} err="failed to get container status \"c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682\": rpc error: code = NotFound desc = could not find container \"c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682\": container with ID starting with c88c5e61669f04b7a7bc1511f8ad83b0984641ce3064f02f267c79067dff0682 not found: ID does not exist" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.537107 4767 scope.go:117] "RemoveContainer" containerID="82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31" Nov 24 22:52:59 crc kubenswrapper[4767]: E1124 22:52:59.537537 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31\": container with ID starting with 82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31 not found: ID does not exist" containerID="82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.537596 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31"} err="failed to get container status \"82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31\": rpc error: code = NotFound desc = could not find container \"82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31\": container with ID starting with 82007dc2c0edfb40e49556e25ef14d810eb711e0918a09afba9772ebf9fe6e31 not found: ID does not exist" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.537630 4767 scope.go:117] "RemoveContainer" 
containerID="92cd590cba3ba68de377f8de86fda55e638bcb4490a2aecd2a6aa1c18f691b41" Nov 24 22:52:59 crc kubenswrapper[4767]: E1124 22:52:59.537947 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92cd590cba3ba68de377f8de86fda55e638bcb4490a2aecd2a6aa1c18f691b41\": container with ID starting with 92cd590cba3ba68de377f8de86fda55e638bcb4490a2aecd2a6aa1c18f691b41 not found: ID does not exist" containerID="92cd590cba3ba68de377f8de86fda55e638bcb4490a2aecd2a6aa1c18f691b41" Nov 24 22:52:59 crc kubenswrapper[4767]: I1124 22:52:59.537985 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92cd590cba3ba68de377f8de86fda55e638bcb4490a2aecd2a6aa1c18f691b41"} err="failed to get container status \"92cd590cba3ba68de377f8de86fda55e638bcb4490a2aecd2a6aa1c18f691b41\": rpc error: code = NotFound desc = could not find container \"92cd590cba3ba68de377f8de86fda55e638bcb4490a2aecd2a6aa1c18f691b41\": container with ID starting with 92cd590cba3ba68de377f8de86fda55e638bcb4490a2aecd2a6aa1c18f691b41 not found: ID does not exist" Nov 24 22:53:00 crc kubenswrapper[4767]: I1124 22:53:00.330855 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="632dc288-b4e2-4879-ac73-07e30dc91469" path="/var/lib/kubelet/pods/632dc288-b4e2-4879-ac73-07e30dc91469/volumes" Nov 24 22:53:09 crc kubenswrapper[4767]: I1124 22:53:09.313583 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:53:09 crc kubenswrapper[4767]: E1124 22:53:09.314660 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:53:24 crc kubenswrapper[4767]: I1124 22:53:24.314155 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:53:24 crc kubenswrapper[4767]: E1124 22:53:24.315189 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:53:35 crc kubenswrapper[4767]: I1124 22:53:35.313603 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:53:35 crc kubenswrapper[4767]: E1124 22:53:35.314506 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:53:47 crc kubenswrapper[4767]: I1124 22:53:47.314403 4767 scope.go:117] "RemoveContainer" 
containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:53:47 crc kubenswrapper[4767]: E1124 22:53:47.315764 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:54:00 crc kubenswrapper[4767]: I1124 22:54:00.313690 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:54:00 crc kubenswrapper[4767]: E1124 22:54:00.314793 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:54:11 crc kubenswrapper[4767]: I1124 22:54:11.314347 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:54:11 crc kubenswrapper[4767]: E1124 22:54:11.315358 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:54:26 crc kubenswrapper[4767]: I1124 22:54:26.314331 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:54:26 crc kubenswrapper[4767]: E1124 22:54:26.316043 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:54:37 crc kubenswrapper[4767]: I1124 22:54:37.313814 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:54:37 crc kubenswrapper[4767]: E1124 22:54:37.315450 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:54:48 crc kubenswrapper[4767]: I1124 22:54:48.322026 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:54:48 crc kubenswrapper[4767]: E1124 22:54:48.323057 4767 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:55:00 crc kubenswrapper[4767]: I1124 22:55:00.314953 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:55:00 crc kubenswrapper[4767]: E1124 22:55:00.315968 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 22:55:15 crc kubenswrapper[4767]: I1124 22:55:15.314182 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 22:55:16 crc kubenswrapper[4767]: I1124 22:55:16.015112 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"65dd25949d2692848339b7c7f03d3a6b02a7879e37418fae805271d6028ce665"} Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.680453 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h8nzv"] Nov 24 22:55:41 crc kubenswrapper[4767]: E1124 22:55:41.682119 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632dc288-b4e2-4879-ac73-07e30dc91469" containerName="extract-content" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.682145 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="632dc288-b4e2-4879-ac73-07e30dc91469" containerName="extract-content" Nov 24 22:55:41 crc kubenswrapper[4767]: E1124 22:55:41.682175 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632dc288-b4e2-4879-ac73-07e30dc91469" containerName="extract-utilities" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.682189 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="632dc288-b4e2-4879-ac73-07e30dc91469" containerName="extract-utilities" Nov 24 22:55:41 crc kubenswrapper[4767]: E1124 22:55:41.682213 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632dc288-b4e2-4879-ac73-07e30dc91469" containerName="registry-server" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.682240 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="632dc288-b4e2-4879-ac73-07e30dc91469" containerName="registry-server" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.683784 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="632dc288-b4e2-4879-ac73-07e30dc91469" containerName="registry-server" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.686535 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.694943 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8nzv"] Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.803598 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-utilities\") pod \"certified-operators-h8nzv\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.803661 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t8sb\" (UniqueName: \"kubernetes.io/projected/f114dd55-f2e6-4764-a1ad-4a6c0b946795-kube-api-access-6t8sb\") pod \"certified-operators-h8nzv\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.803722 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-catalog-content\") pod \"certified-operators-h8nzv\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.906922 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-catalog-content\") pod \"certified-operators-h8nzv\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.907135 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-utilities\") pod \"certified-operators-h8nzv\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.907172 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t8sb\" (UniqueName: \"kubernetes.io/projected/f114dd55-f2e6-4764-a1ad-4a6c0b946795-kube-api-access-6t8sb\") pod \"certified-operators-h8nzv\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.907502 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-catalog-content\") pod \"certified-operators-h8nzv\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.907593 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-utilities\") pod \"certified-operators-h8nzv\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:41 crc kubenswrapper[4767]: I1124 22:55:41.928993 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6t8sb\" (UniqueName: \"kubernetes.io/projected/f114dd55-f2e6-4764-a1ad-4a6c0b946795-kube-api-access-6t8sb\") pod \"certified-operators-h8nzv\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:42 crc kubenswrapper[4767]: I1124 22:55:42.013963 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:42 crc kubenswrapper[4767]: I1124 22:55:42.510789 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8nzv"] Nov 24 22:55:42 crc kubenswrapper[4767]: W1124 22:55:42.518839 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf114dd55_f2e6_4764_a1ad_4a6c0b946795.slice/crio-3edcc70fed657316092f58eb684a89a303aafe01a8008c21824fb32f00a776a2 WatchSource:0}: Error finding container 3edcc70fed657316092f58eb684a89a303aafe01a8008c21824fb32f00a776a2: Status 404 returned error can't find the container with id 3edcc70fed657316092f58eb684a89a303aafe01a8008c21824fb32f00a776a2 Nov 24 22:55:43 crc kubenswrapper[4767]: I1124 22:55:43.302306 4767 generic.go:334] "Generic (PLEG): container finished" podID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" containerID="9ed2d6b0e91a5ccdcf27319029d4a113686efc74999244c8fc2382f67b42c19b" exitCode=0 Nov 24 22:55:43 crc kubenswrapper[4767]: I1124 22:55:43.302435 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8nzv" event={"ID":"f114dd55-f2e6-4764-a1ad-4a6c0b946795","Type":"ContainerDied","Data":"9ed2d6b0e91a5ccdcf27319029d4a113686efc74999244c8fc2382f67b42c19b"} Nov 24 22:55:43 crc kubenswrapper[4767]: I1124 22:55:43.302749 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8nzv" event={"ID":"f114dd55-f2e6-4764-a1ad-4a6c0b946795","Type":"ContainerStarted","Data":"3edcc70fed657316092f58eb684a89a303aafe01a8008c21824fb32f00a776a2"} Nov 24 22:55:44 crc kubenswrapper[4767]: I1124 22:55:44.325027 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8nzv" event={"ID":"f114dd55-f2e6-4764-a1ad-4a6c0b946795","Type":"ContainerStarted","Data":"e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53"} Nov 24 22:55:45 crc kubenswrapper[4767]: I1124 22:55:45.329637 4767 generic.go:334] "Generic (PLEG): container finished" podID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" containerID="e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53" exitCode=0 Nov 24 22:55:45 crc kubenswrapper[4767]: I1124 22:55:45.329695 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8nzv" event={"ID":"f114dd55-f2e6-4764-a1ad-4a6c0b946795","Type":"ContainerDied","Data":"e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53"} Nov 24 22:55:47 crc kubenswrapper[4767]: I1124 22:55:47.354669 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8nzv" event={"ID":"f114dd55-f2e6-4764-a1ad-4a6c0b946795","Type":"ContainerStarted","Data":"776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5"} Nov 24 22:55:47 crc kubenswrapper[4767]: I1124 22:55:47.384047 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h8nzv" 
podStartSLOduration=3.654421968 podStartE2EDuration="6.384018168s" podCreationTimestamp="2025-11-24 22:55:41 +0000 UTC" firstStartedPulling="2025-11-24 22:55:43.304135195 +0000 UTC m=+4626.221118587" lastFinishedPulling="2025-11-24 22:55:46.033731405 +0000 UTC m=+4628.950714787" observedRunningTime="2025-11-24 22:55:47.374382556 +0000 UTC m=+4630.291365938" watchObservedRunningTime="2025-11-24 22:55:47.384018168 +0000 UTC m=+4630.301001550" Nov 24 22:55:52 crc kubenswrapper[4767]: I1124 22:55:52.014138 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:52 crc kubenswrapper[4767]: I1124 22:55:52.014802 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:52 crc kubenswrapper[4767]: I1124 22:55:52.088508 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:52 crc kubenswrapper[4767]: I1124 22:55:52.480571 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:52 crc kubenswrapper[4767]: I1124 22:55:52.556031 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8nzv"] Nov 24 22:55:54 crc kubenswrapper[4767]: I1124 22:55:54.439737 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h8nzv" podUID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" containerName="registry-server" containerID="cri-o://776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5" gracePeriod=2 Nov 24 22:55:54 crc kubenswrapper[4767]: E1124 22:55:54.721559 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf114dd55_f2e6_4764_a1ad_4a6c0b946795.slice/crio-776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf114dd55_f2e6_4764_a1ad_4a6c0b946795.slice/crio-conmon-776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:55:54 crc kubenswrapper[4767]: I1124 22:55:54.969867 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.006995 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-catalog-content\") pod \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.007154 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t8sb\" (UniqueName: \"kubernetes.io/projected/f114dd55-f2e6-4764-a1ad-4a6c0b946795-kube-api-access-6t8sb\") pod \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.007211 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-utilities\") pod \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\" (UID: \"f114dd55-f2e6-4764-a1ad-4a6c0b946795\") " Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.008306 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-utilities" (OuterVolumeSpecName: "utilities") pod "f114dd55-f2e6-4764-a1ad-4a6c0b946795" (UID: "f114dd55-f2e6-4764-a1ad-4a6c0b946795"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.016785 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f114dd55-f2e6-4764-a1ad-4a6c0b946795-kube-api-access-6t8sb" (OuterVolumeSpecName: "kube-api-access-6t8sb") pod "f114dd55-f2e6-4764-a1ad-4a6c0b946795" (UID: "f114dd55-f2e6-4764-a1ad-4a6c0b946795"). InnerVolumeSpecName "kube-api-access-6t8sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.059911 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f114dd55-f2e6-4764-a1ad-4a6c0b946795" (UID: "f114dd55-f2e6-4764-a1ad-4a6c0b946795"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.110112 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.110146 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t8sb\" (UniqueName: \"kubernetes.io/projected/f114dd55-f2e6-4764-a1ad-4a6c0b946795-kube-api-access-6t8sb\") on node \"crc\" DevicePath \"\"" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.110158 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f114dd55-f2e6-4764-a1ad-4a6c0b946795-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.456252 4767 generic.go:334] "Generic (PLEG): container finished" podID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" containerID="776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5" exitCode=0 Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.456359 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8nzv" event={"ID":"f114dd55-f2e6-4764-a1ad-4a6c0b946795","Type":"ContainerDied","Data":"776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5"} Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.456822 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8nzv" event={"ID":"f114dd55-f2e6-4764-a1ad-4a6c0b946795","Type":"ContainerDied","Data":"3edcc70fed657316092f58eb684a89a303aafe01a8008c21824fb32f00a776a2"} Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.456872 4767 scope.go:117] "RemoveContainer" containerID="776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.456401 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h8nzv" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.496977 4767 scope.go:117] "RemoveContainer" containerID="e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.526909 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8nzv"] Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.542400 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h8nzv"] Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.551583 4767 scope.go:117] "RemoveContainer" containerID="9ed2d6b0e91a5ccdcf27319029d4a113686efc74999244c8fc2382f67b42c19b" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.596602 4767 scope.go:117] "RemoveContainer" containerID="776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5" Nov 24 22:55:55 crc kubenswrapper[4767]: E1124 22:55:55.597738 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5\": container with ID starting with 776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5 not found: ID does not exist" containerID="776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.597828 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5"} err="failed to get container status \"776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5\": rpc error: code = NotFound desc = could not find container \"776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5\": container with ID starting with 776fcf8f6042e74bc1f4c2c9819b402e419585e31087db0bd343e2b7222737a5 not found: ID does not exist" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.597885 4767 scope.go:117] "RemoveContainer" containerID="e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53" Nov 24 22:55:55 crc kubenswrapper[4767]: E1124 22:55:55.598400 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53\": container with ID starting with e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53 not found: ID does not exist" containerID="e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.598441 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53"} err="failed to get container status \"e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53\": rpc error: code = NotFound desc = could not find container \"e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53\": container with ID starting with e9f8a7595e19d2f06f3af7665461e948abe1b4d5f661a6d430b0f5e77f606f53 not found: ID does not exist" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.598471 4767 scope.go:117] "RemoveContainer" containerID="9ed2d6b0e91a5ccdcf27319029d4a113686efc74999244c8fc2382f67b42c19b" Nov 24 22:55:55 crc kubenswrapper[4767]: E1124 22:55:55.598970 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9ed2d6b0e91a5ccdcf27319029d4a113686efc74999244c8fc2382f67b42c19b\": container with ID starting with 9ed2d6b0e91a5ccdcf27319029d4a113686efc74999244c8fc2382f67b42c19b not found: ID does not exist" containerID="9ed2d6b0e91a5ccdcf27319029d4a113686efc74999244c8fc2382f67b42c19b" Nov 24 22:55:55 crc kubenswrapper[4767]: I1124 22:55:55.599023 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed2d6b0e91a5ccdcf27319029d4a113686efc74999244c8fc2382f67b42c19b"} err="failed to get container status \"9ed2d6b0e91a5ccdcf27319029d4a113686efc74999244c8fc2382f67b42c19b\": rpc error: code = NotFound desc = could not find container \"9ed2d6b0e91a5ccdcf27319029d4a113686efc74999244c8fc2382f67b42c19b\": container with ID starting with 9ed2d6b0e91a5ccdcf27319029d4a113686efc74999244c8fc2382f67b42c19b not found: ID does not exist" Nov 24 22:55:56 crc kubenswrapper[4767]: I1124 22:55:56.333703 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" path="/var/lib/kubelet/pods/f114dd55-f2e6-4764-a1ad-4a6c0b946795/volumes" Nov 24 22:57:35 crc kubenswrapper[4767]: I1124 22:57:35.481562 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:57:35 crc kubenswrapper[4767]: I1124 22:57:35.482130 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:58:05 crc kubenswrapper[4767]: I1124 22:58:05.482137 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:58:05 crc kubenswrapper[4767]: I1124 22:58:05.482801 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:58:35 crc kubenswrapper[4767]: I1124 22:58:35.481874 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:58:35 crc kubenswrapper[4767]: I1124 22:58:35.482703 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:58:35 crc kubenswrapper[4767]: I1124 22:58:35.482772 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 22:58:35 crc kubenswrapper[4767]: I1124 22:58:35.483929 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65dd25949d2692848339b7c7f03d3a6b02a7879e37418fae805271d6028ce665"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:58:35 crc kubenswrapper[4767]: I1124 22:58:35.484060 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://65dd25949d2692848339b7c7f03d3a6b02a7879e37418fae805271d6028ce665" gracePeriod=600 Nov 24 22:58:36 crc kubenswrapper[4767]: I1124 22:58:36.014110 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="65dd25949d2692848339b7c7f03d3a6b02a7879e37418fae805271d6028ce665" exitCode=0 Nov 24 22:58:36 crc kubenswrapper[4767]: I1124 22:58:36.014197 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"65dd25949d2692848339b7c7f03d3a6b02a7879e37418fae805271d6028ce665"} Nov 24 22:58:36 crc kubenswrapper[4767]: I1124 22:58:36.014811 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba"} Nov 24 22:58:36 crc kubenswrapper[4767]: I1124 22:58:36.014840 4767 scope.go:117] "RemoveContainer" containerID="1071c857414f8c475254d77ca0e14bc322132422b016406801d23f92e60d666e" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.162217 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5"] Nov 24 23:00:00 crc kubenswrapper[4767]: E1124 23:00:00.163158 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" containerName="registry-server" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.163236 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" containerName="registry-server" Nov 24 23:00:00 crc kubenswrapper[4767]: E1124 23:00:00.163256 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" containerName="extract-content" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.163262 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" containerName="extract-content" Nov 24 23:00:00 crc kubenswrapper[4767]: E1124 23:00:00.163309 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" containerName="extract-utilities" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.163319 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" containerName="extract-utilities" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.163515 4767 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f114dd55-f2e6-4764-a1ad-4a6c0b946795" containerName="registry-server" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.164314 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.169868 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.170885 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.198108 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5"] Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.278682 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spdw5\" (UniqueName: \"kubernetes.io/projected/4be755d1-1436-4399-80f2-3623c495dc85-kube-api-access-spdw5\") pod \"collect-profiles-29400420-8p5n5\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.278892 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4be755d1-1436-4399-80f2-3623c495dc85-secret-volume\") pod \"collect-profiles-29400420-8p5n5\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.278924 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4be755d1-1436-4399-80f2-3623c495dc85-config-volume\") pod \"collect-profiles-29400420-8p5n5\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.380730 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4be755d1-1436-4399-80f2-3623c495dc85-secret-volume\") pod \"collect-profiles-29400420-8p5n5\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.380814 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4be755d1-1436-4399-80f2-3623c495dc85-config-volume\") pod \"collect-profiles-29400420-8p5n5\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.381047 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spdw5\" (UniqueName: \"kubernetes.io/projected/4be755d1-1436-4399-80f2-3623c495dc85-kube-api-access-spdw5\") pod \"collect-profiles-29400420-8p5n5\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.382212 
4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4be755d1-1436-4399-80f2-3623c495dc85-config-volume\") pod \"collect-profiles-29400420-8p5n5\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.391935 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4be755d1-1436-4399-80f2-3623c495dc85-secret-volume\") pod \"collect-profiles-29400420-8p5n5\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.400403 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spdw5\" (UniqueName: \"kubernetes.io/projected/4be755d1-1436-4399-80f2-3623c495dc85-kube-api-access-spdw5\") pod \"collect-profiles-29400420-8p5n5\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.495920 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:00 crc kubenswrapper[4767]: I1124 23:00:00.983085 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5"] Nov 24 23:00:01 crc kubenswrapper[4767]: I1124 23:00:01.974383 4767 generic.go:334] "Generic (PLEG): container finished" podID="4be755d1-1436-4399-80f2-3623c495dc85" containerID="f347b1d0d9f714db6dd25eefabf148272a4dca5f9d6ec0d2c01f295dd5bc8a66" exitCode=0 Nov 24 23:00:01 crc kubenswrapper[4767]: I1124 23:00:01.974492 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" event={"ID":"4be755d1-1436-4399-80f2-3623c495dc85","Type":"ContainerDied","Data":"f347b1d0d9f714db6dd25eefabf148272a4dca5f9d6ec0d2c01f295dd5bc8a66"} Nov 24 23:00:01 crc kubenswrapper[4767]: I1124 23:00:01.974861 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" event={"ID":"4be755d1-1436-4399-80f2-3623c495dc85","Type":"ContainerStarted","Data":"03dcf664f3bfd689404477029ece67eecc855db25446d505dd61a3fb0b7aa160"} Nov 24 23:00:03 crc kubenswrapper[4767]: I1124 23:00:03.433973 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:03 crc kubenswrapper[4767]: I1124 23:00:03.557217 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4be755d1-1436-4399-80f2-3623c495dc85-secret-volume\") pod \"4be755d1-1436-4399-80f2-3623c495dc85\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " Nov 24 23:00:03 crc kubenswrapper[4767]: I1124 23:00:03.557337 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spdw5\" (UniqueName: \"kubernetes.io/projected/4be755d1-1436-4399-80f2-3623c495dc85-kube-api-access-spdw5\") pod \"4be755d1-1436-4399-80f2-3623c495dc85\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " Nov 24 23:00:03 crc kubenswrapper[4767]: I1124 23:00:03.557374 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4be755d1-1436-4399-80f2-3623c495dc85-config-volume\") pod \"4be755d1-1436-4399-80f2-3623c495dc85\" (UID: \"4be755d1-1436-4399-80f2-3623c495dc85\") " Nov 24 23:00:03 crc kubenswrapper[4767]: I1124 23:00:03.558401 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be755d1-1436-4399-80f2-3623c495dc85-config-volume" (OuterVolumeSpecName: "config-volume") pod "4be755d1-1436-4399-80f2-3623c495dc85" (UID: "4be755d1-1436-4399-80f2-3623c495dc85"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 23:00:03 crc kubenswrapper[4767]: I1124 23:00:03.563705 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be755d1-1436-4399-80f2-3623c495dc85-kube-api-access-spdw5" (OuterVolumeSpecName: "kube-api-access-spdw5") pod "4be755d1-1436-4399-80f2-3623c495dc85" (UID: "4be755d1-1436-4399-80f2-3623c495dc85"). InnerVolumeSpecName "kube-api-access-spdw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:00:03 crc kubenswrapper[4767]: I1124 23:00:03.563790 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be755d1-1436-4399-80f2-3623c495dc85-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4be755d1-1436-4399-80f2-3623c495dc85" (UID: "4be755d1-1436-4399-80f2-3623c495dc85"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:00:03 crc kubenswrapper[4767]: I1124 23:00:03.660174 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4be755d1-1436-4399-80f2-3623c495dc85-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 23:00:03 crc kubenswrapper[4767]: I1124 23:00:03.660222 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spdw5\" (UniqueName: \"kubernetes.io/projected/4be755d1-1436-4399-80f2-3623c495dc85-kube-api-access-spdw5\") on node \"crc\" DevicePath \"\"" Nov 24 23:00:03 crc kubenswrapper[4767]: I1124 23:00:03.660238 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4be755d1-1436-4399-80f2-3623c495dc85-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 23:00:04 crc kubenswrapper[4767]: I1124 23:00:04.000154 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" event={"ID":"4be755d1-1436-4399-80f2-3623c495dc85","Type":"ContainerDied","Data":"03dcf664f3bfd689404477029ece67eecc855db25446d505dd61a3fb0b7aa160"} Nov 24 23:00:04 crc kubenswrapper[4767]: I1124 23:00:04.000525 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03dcf664f3bfd689404477029ece67eecc855db25446d505dd61a3fb0b7aa160" Nov 24 23:00:04 crc kubenswrapper[4767]: I1124 23:00:04.000223 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-8p5n5" Nov 24 23:00:04 crc kubenswrapper[4767]: I1124 23:00:04.524455 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs"] Nov 24 23:00:04 crc kubenswrapper[4767]: I1124 23:00:04.534373 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-vhcxs"] Nov 24 23:00:06 crc kubenswrapper[4767]: I1124 23:00:06.333349 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555bcb9b-a2cc-4c32-9655-b14a430346cf" path="/var/lib/kubelet/pods/555bcb9b-a2cc-4c32-9655-b14a430346cf/volumes" Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.109110 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h7gj5"] Nov 24 23:00:11 crc kubenswrapper[4767]: E1124 23:00:11.111092 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be755d1-1436-4399-80f2-3623c495dc85" containerName="collect-profiles" Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.111180 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be755d1-1436-4399-80f2-3623c495dc85" containerName="collect-profiles" Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.111487 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be755d1-1436-4399-80f2-3623c495dc85" containerName="collect-profiles" Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.113415 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.126884 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7gj5"]
Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.225681 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-utilities\") pod \"community-operators-h7gj5\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") " pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.226011 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8v4g\" (UniqueName: \"kubernetes.io/projected/5524b940-040f-49d4-a179-47cf93764cdc-kube-api-access-g8v4g\") pod \"community-operators-h7gj5\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") " pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.226171 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-catalog-content\") pod \"community-operators-h7gj5\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") " pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.327962 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-catalog-content\") pod \"community-operators-h7gj5\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") " pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.328072 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-utilities\") pod \"community-operators-h7gj5\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") " pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.328098 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8v4g\" (UniqueName: \"kubernetes.io/projected/5524b940-040f-49d4-a179-47cf93764cdc-kube-api-access-g8v4g\") pod \"community-operators-h7gj5\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") " pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.328585 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-catalog-content\") pod \"community-operators-h7gj5\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") " pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.328736 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-utilities\") pod \"community-operators-h7gj5\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") " pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.350673 4767 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-g8v4g\" (UniqueName: \"kubernetes.io/projected/5524b940-040f-49d4-a179-47cf93764cdc-kube-api-access-g8v4g\") pod \"community-operators-h7gj5\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") " pod="openshift-marketplace/community-operators-h7gj5" Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.437172 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7gj5" Nov 24 23:00:11 crc kubenswrapper[4767]: I1124 23:00:11.975656 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7gj5"] Nov 24 23:00:11 crc kubenswrapper[4767]: W1124 23:00:11.981351 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5524b940_040f_49d4_a179_47cf93764cdc.slice/crio-50ed481fdaa968f909373435bd00bb654732b9dd39f316f560e6f8f0f5468bfd WatchSource:0}: Error finding container 50ed481fdaa968f909373435bd00bb654732b9dd39f316f560e6f8f0f5468bfd: Status 404 returned error can't find the container with id 50ed481fdaa968f909373435bd00bb654732b9dd39f316f560e6f8f0f5468bfd Nov 24 23:00:12 crc kubenswrapper[4767]: I1124 23:00:12.100769 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7gj5" event={"ID":"5524b940-040f-49d4-a179-47cf93764cdc","Type":"ContainerStarted","Data":"50ed481fdaa968f909373435bd00bb654732b9dd39f316f560e6f8f0f5468bfd"} Nov 24 23:00:13 crc kubenswrapper[4767]: I1124 23:00:13.113910 4767 generic.go:334] "Generic (PLEG): container finished" podID="5524b940-040f-49d4-a179-47cf93764cdc" containerID="11761747e9eaf59188e1c533d513af59866bb102a1cc6bc83cd607c156a82b76" exitCode=0 Nov 24 23:00:13 crc kubenswrapper[4767]: I1124 23:00:13.114033 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7gj5" event={"ID":"5524b940-040f-49d4-a179-47cf93764cdc","Type":"ContainerDied","Data":"11761747e9eaf59188e1c533d513af59866bb102a1cc6bc83cd607c156a82b76"} Nov 24 23:00:13 crc kubenswrapper[4767]: I1124 23:00:13.116120 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 23:00:15 crc kubenswrapper[4767]: I1124 23:00:15.138738 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7gj5" event={"ID":"5524b940-040f-49d4-a179-47cf93764cdc","Type":"ContainerStarted","Data":"9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536"} Nov 24 23:00:16 crc kubenswrapper[4767]: I1124 23:00:16.148797 4767 generic.go:334] "Generic (PLEG): container finished" podID="5524b940-040f-49d4-a179-47cf93764cdc" containerID="9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536" exitCode=0 Nov 24 23:00:16 crc kubenswrapper[4767]: I1124 23:00:16.148857 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7gj5" event={"ID":"5524b940-040f-49d4-a179-47cf93764cdc","Type":"ContainerDied","Data":"9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536"} Nov 24 23:00:17 crc kubenswrapper[4767]: I1124 23:00:17.160983 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7gj5" event={"ID":"5524b940-040f-49d4-a179-47cf93764cdc","Type":"ContainerStarted","Data":"b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0"} Nov 24 23:00:17 crc kubenswrapper[4767]: I1124 
23:00:17.186827 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h7gj5" podStartSLOduration=2.720363135 podStartE2EDuration="6.186809031s" podCreationTimestamp="2025-11-24 23:00:11 +0000 UTC" firstStartedPulling="2025-11-24 23:00:13.115774179 +0000 UTC m=+4896.032757561" lastFinishedPulling="2025-11-24 23:00:16.582220085 +0000 UTC m=+4899.499203457" observedRunningTime="2025-11-24 23:00:17.179814313 +0000 UTC m=+4900.096797685" watchObservedRunningTime="2025-11-24 23:00:17.186809031 +0000 UTC m=+4900.103792403"
Nov 24 23:00:21 crc kubenswrapper[4767]: I1124 23:00:21.438068 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:21 crc kubenswrapper[4767]: I1124 23:00:21.439019 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:21 crc kubenswrapper[4767]: I1124 23:00:21.516182 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:22 crc kubenswrapper[4767]: I1124 23:00:22.301817 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:22 crc kubenswrapper[4767]: I1124 23:00:22.373770 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7gj5"]
Nov 24 23:00:24 crc kubenswrapper[4767]: I1124 23:00:24.252598 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h7gj5" podUID="5524b940-040f-49d4-a179-47cf93764cdc" containerName="registry-server" containerID="cri-o://b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0" gracePeriod=2
Nov 24 23:00:24 crc kubenswrapper[4767]: I1124 23:00:24.861330 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.029026 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-utilities\") pod \"5524b940-040f-49d4-a179-47cf93764cdc\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") "
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.029416 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-catalog-content\") pod \"5524b940-040f-49d4-a179-47cf93764cdc\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") "
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.029497 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8v4g\" (UniqueName: \"kubernetes.io/projected/5524b940-040f-49d4-a179-47cf93764cdc-kube-api-access-g8v4g\") pod \"5524b940-040f-49d4-a179-47cf93764cdc\" (UID: \"5524b940-040f-49d4-a179-47cf93764cdc\") "
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.031573 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-utilities" (OuterVolumeSpecName: "utilities") pod "5524b940-040f-49d4-a179-47cf93764cdc" (UID: "5524b940-040f-49d4-a179-47cf93764cdc"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.040201 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5524b940-040f-49d4-a179-47cf93764cdc-kube-api-access-g8v4g" (OuterVolumeSpecName: "kube-api-access-g8v4g") pod "5524b940-040f-49d4-a179-47cf93764cdc" (UID: "5524b940-040f-49d4-a179-47cf93764cdc"). InnerVolumeSpecName "kube-api-access-g8v4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.114731 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5524b940-040f-49d4-a179-47cf93764cdc" (UID: "5524b940-040f-49d4-a179-47cf93764cdc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.132347 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.132391 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5524b940-040f-49d4-a179-47cf93764cdc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.132404 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8v4g\" (UniqueName: \"kubernetes.io/projected/5524b940-040f-49d4-a179-47cf93764cdc-kube-api-access-g8v4g\") on node \"crc\" DevicePath \"\"" Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.267259 4767 generic.go:334] "Generic (PLEG): container finished" podID="5524b940-040f-49d4-a179-47cf93764cdc" containerID="b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0" exitCode=0 Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.267349 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7gj5" event={"ID":"5524b940-040f-49d4-a179-47cf93764cdc","Type":"ContainerDied","Data":"b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0"} Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.267394 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7gj5" event={"ID":"5524b940-040f-49d4-a179-47cf93764cdc","Type":"ContainerDied","Data":"50ed481fdaa968f909373435bd00bb654732b9dd39f316f560e6f8f0f5468bfd"} Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.267417 4767 scope.go:117] "RemoveContainer" containerID="b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0" Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.267494 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7gj5"
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.309659 4767 scope.go:117] "RemoveContainer" containerID="9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536"
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.315103 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7gj5"]
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.328356 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h7gj5"]
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.344133 4767 scope.go:117] "RemoveContainer" containerID="11761747e9eaf59188e1c533d513af59866bb102a1cc6bc83cd607c156a82b76"
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.412821 4767 scope.go:117] "RemoveContainer" containerID="b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0"
Nov 24 23:00:25 crc kubenswrapper[4767]: E1124 23:00:25.413242 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0\": container with ID starting with b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0 not found: ID does not exist" containerID="b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0"
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.413438 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0"} err="failed to get container status \"b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0\": rpc error: code = NotFound desc = could not find container \"b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0\": container with ID starting with b7b1fc053bfa50a351031bee0e24f213b3a32d62082d689df68e74ba0ca1e7e0 not found: ID does not exist"
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.413567 4767 scope.go:117] "RemoveContainer" containerID="9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536"
Nov 24 23:00:25 crc kubenswrapper[4767]: E1124 23:00:25.414019 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536\": container with ID starting with 9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536 not found: ID does not exist" containerID="9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536"
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.414073 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536"} err="failed to get container status \"9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536\": rpc error: code = NotFound desc = could not find container \"9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536\": container with ID starting with 9921cce2963ea065723fa3f2a5d57874445cbef66fb151236ef7b3862b691536 not found: ID does not exist"
Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.414107 4767 scope.go:117] "RemoveContainer" containerID="11761747e9eaf59188e1c533d513af59866bb102a1cc6bc83cd607c156a82b76"
Nov 24 23:00:25 crc kubenswrapper[4767]: E1124 23:00:25.414672 4767 log.go:32] "ContainerStatus from runtime service
failed" err="rpc error: code = NotFound desc = could not find container \"11761747e9eaf59188e1c533d513af59866bb102a1cc6bc83cd607c156a82b76\": container with ID starting with 11761747e9eaf59188e1c533d513af59866bb102a1cc6bc83cd607c156a82b76 not found: ID does not exist" containerID="11761747e9eaf59188e1c533d513af59866bb102a1cc6bc83cd607c156a82b76" Nov 24 23:00:25 crc kubenswrapper[4767]: I1124 23:00:25.414712 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11761747e9eaf59188e1c533d513af59866bb102a1cc6bc83cd607c156a82b76"} err="failed to get container status \"11761747e9eaf59188e1c533d513af59866bb102a1cc6bc83cd607c156a82b76\": rpc error: code = NotFound desc = could not find container \"11761747e9eaf59188e1c533d513af59866bb102a1cc6bc83cd607c156a82b76\": container with ID starting with 11761747e9eaf59188e1c533d513af59866bb102a1cc6bc83cd607c156a82b76 not found: ID does not exist" Nov 24 23:00:26 crc kubenswrapper[4767]: I1124 23:00:26.334109 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5524b940-040f-49d4-a179-47cf93764cdc" path="/var/lib/kubelet/pods/5524b940-040f-49d4-a179-47cf93764cdc/volumes" Nov 24 23:00:35 crc kubenswrapper[4767]: I1124 23:00:35.481550 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:00:35 crc kubenswrapper[4767]: I1124 23:00:35.482139 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:00:44 crc kubenswrapper[4767]: I1124 23:00:44.931232 4767 scope.go:117] "RemoveContainer" containerID="accb47d64b5665f3970f5f6a8b07656660b73631c553e67949bac000c7946fe2" Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.162945 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29400421-q74h2"] Nov 24 23:01:00 crc kubenswrapper[4767]: E1124 23:01:00.163816 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5524b940-040f-49d4-a179-47cf93764cdc" containerName="extract-utilities" Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.163830 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5524b940-040f-49d4-a179-47cf93764cdc" containerName="extract-utilities" Nov 24 23:01:00 crc kubenswrapper[4767]: E1124 23:01:00.163853 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5524b940-040f-49d4-a179-47cf93764cdc" containerName="registry-server" Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.163860 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5524b940-040f-49d4-a179-47cf93764cdc" containerName="registry-server" Nov 24 23:01:00 crc kubenswrapper[4767]: E1124 23:01:00.163871 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5524b940-040f-49d4-a179-47cf93764cdc" containerName="extract-content" Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.163877 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5524b940-040f-49d4-a179-47cf93764cdc" containerName="extract-content" Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.164081 4767 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5524b940-040f-49d4-a179-47cf93764cdc" containerName="registry-server"
Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.164874 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400421-q74h2"
Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.198298 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400421-q74h2"]
Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.295021 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6s7w\" (UniqueName: \"kubernetes.io/projected/98fc4d42-16a8-4051-afa0-e47332ee72bf-kube-api-access-j6s7w\") pod \"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2"
Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.295077 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-fernet-keys\") pod \"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2"
Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.295239 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-config-data\") pod \"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2"
Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.295617 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-combined-ca-bundle\") pod \"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2"
Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.397853 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-combined-ca-bundle\") pod \"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2"
Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.398240 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6s7w\" (UniqueName: \"kubernetes.io/projected/98fc4d42-16a8-4051-afa0-e47332ee72bf-kube-api-access-j6s7w\") pod \"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2"
Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.398412 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-fernet-keys\") pod \"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2"
Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.398582 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-config-data\") pod
\"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2" Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.405977 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-fernet-keys\") pod \"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2" Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.407847 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-combined-ca-bundle\") pod \"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2" Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.409097 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-config-data\") pod \"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2" Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.425700 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6s7w\" (UniqueName: \"kubernetes.io/projected/98fc4d42-16a8-4051-afa0-e47332ee72bf-kube-api-access-j6s7w\") pod \"keystone-cron-29400421-q74h2\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") " pod="openstack/keystone-cron-29400421-q74h2" Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.497941 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400421-q74h2" Nov 24 23:01:00 crc kubenswrapper[4767]: I1124 23:01:00.940869 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400421-q74h2"] Nov 24 23:01:00 crc kubenswrapper[4767]: W1124 23:01:00.951461 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98fc4d42_16a8_4051_afa0_e47332ee72bf.slice/crio-1cac38eb003e8c5a32da0cb309261ca4493b2b306f64de6645f07af6bb2ee8d8 WatchSource:0}: Error finding container 1cac38eb003e8c5a32da0cb309261ca4493b2b306f64de6645f07af6bb2ee8d8: Status 404 returned error can't find the container with id 1cac38eb003e8c5a32da0cb309261ca4493b2b306f64de6645f07af6bb2ee8d8 Nov 24 23:01:01 crc kubenswrapper[4767]: I1124 23:01:01.673731 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400421-q74h2" event={"ID":"98fc4d42-16a8-4051-afa0-e47332ee72bf","Type":"ContainerStarted","Data":"ed7ae0a6d029a0c727fde55665f3f5a7dde60974cbf730c2f0d5530517df4719"} Nov 24 23:01:01 crc kubenswrapper[4767]: I1124 23:01:01.674407 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400421-q74h2" event={"ID":"98fc4d42-16a8-4051-afa0-e47332ee72bf","Type":"ContainerStarted","Data":"1cac38eb003e8c5a32da0cb309261ca4493b2b306f64de6645f07af6bb2ee8d8"} Nov 24 23:01:01 crc kubenswrapper[4767]: I1124 23:01:01.694880 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29400421-q74h2" podStartSLOduration=1.694852797 podStartE2EDuration="1.694852797s" podCreationTimestamp="2025-11-24 23:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 23:01:01.693014975 +0000 UTC m=+4944.609998377" watchObservedRunningTime="2025-11-24 23:01:01.694852797 +0000 UTC m=+4944.611836169"
Nov 24 23:01:03 crc kubenswrapper[4767]: I1124 23:01:03.702874 4767 generic.go:334] "Generic (PLEG): container finished" podID="98fc4d42-16a8-4051-afa0-e47332ee72bf" containerID="ed7ae0a6d029a0c727fde55665f3f5a7dde60974cbf730c2f0d5530517df4719" exitCode=0
Nov 24 23:01:03 crc kubenswrapper[4767]: I1124 23:01:03.702980 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400421-q74h2" event={"ID":"98fc4d42-16a8-4051-afa0-e47332ee72bf","Type":"ContainerDied","Data":"ed7ae0a6d029a0c727fde55665f3f5a7dde60974cbf730c2f0d5530517df4719"}
Nov 24 23:01:03 crc kubenswrapper[4767]: E1124 23:01:03.798691 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98fc4d42_16a8_4051_afa0_e47332ee72bf.slice/crio-conmon-ed7ae0a6d029a0c727fde55665f3f5a7dde60974cbf730c2f0d5530517df4719.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98fc4d42_16a8_4051_afa0_e47332ee72bf.slice/crio-ed7ae0a6d029a0c727fde55665f3f5a7dde60974cbf730c2f0d5530517df4719.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.146174 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400421-q74h2"
Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.205957 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-combined-ca-bundle\") pod \"98fc4d42-16a8-4051-afa0-e47332ee72bf\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") "
Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.206089 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-config-data\") pod \"98fc4d42-16a8-4051-afa0-e47332ee72bf\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") "
Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.206204 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6s7w\" (UniqueName: \"kubernetes.io/projected/98fc4d42-16a8-4051-afa0-e47332ee72bf-kube-api-access-j6s7w\") pod \"98fc4d42-16a8-4051-afa0-e47332ee72bf\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") "
Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.206292 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-fernet-keys\") pod \"98fc4d42-16a8-4051-afa0-e47332ee72bf\" (UID: \"98fc4d42-16a8-4051-afa0-e47332ee72bf\") "
Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.481566 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.482029 4767 prober.go:107] "Probe failed" probeType="Liveness"
pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.705839 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "98fc4d42-16a8-4051-afa0-e47332ee72bf" (UID: "98fc4d42-16a8-4051-afa0-e47332ee72bf"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.706236 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98fc4d42-16a8-4051-afa0-e47332ee72bf-kube-api-access-j6s7w" (OuterVolumeSpecName: "kube-api-access-j6s7w") pod "98fc4d42-16a8-4051-afa0-e47332ee72bf" (UID: "98fc4d42-16a8-4051-afa0-e47332ee72bf"). InnerVolumeSpecName "kube-api-access-j6s7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.716787 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6s7w\" (UniqueName: \"kubernetes.io/projected/98fc4d42-16a8-4051-afa0-e47332ee72bf-kube-api-access-j6s7w\") on node \"crc\" DevicePath \"\"" Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.716834 4767 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.735987 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98fc4d42-16a8-4051-afa0-e47332ee72bf" (UID: "98fc4d42-16a8-4051-afa0-e47332ee72bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.755401 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400421-q74h2" event={"ID":"98fc4d42-16a8-4051-afa0-e47332ee72bf","Type":"ContainerDied","Data":"1cac38eb003e8c5a32da0cb309261ca4493b2b306f64de6645f07af6bb2ee8d8"} Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.755445 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cac38eb003e8c5a32da0cb309261ca4493b2b306f64de6645f07af6bb2ee8d8" Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.755512 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400421-q74h2" Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.812320 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-config-data" (OuterVolumeSpecName: "config-data") pod "98fc4d42-16a8-4051-afa0-e47332ee72bf" (UID: "98fc4d42-16a8-4051-afa0-e47332ee72bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.818614 4767 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 23:01:05 crc kubenswrapper[4767]: I1124 23:01:05.818653 4767 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fc4d42-16a8-4051-afa0-e47332ee72bf-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 23:01:35 crc kubenswrapper[4767]: I1124 23:01:35.481563 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:01:35 crc kubenswrapper[4767]: I1124 23:01:35.482094 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:01:35 crc kubenswrapper[4767]: I1124 23:01:35.482153 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 23:01:35 crc kubenswrapper[4767]: I1124 23:01:35.483154 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 23:01:35 crc kubenswrapper[4767]: I1124 23:01:35.483238 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" gracePeriod=600 Nov 24 23:01:35 crc kubenswrapper[4767]: E1124 23:01:35.618866 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:01:36 crc kubenswrapper[4767]: I1124 23:01:36.260005 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" exitCode=0 Nov 24 23:01:36 crc kubenswrapper[4767]: I1124 23:01:36.260110 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba"} Nov 24 23:01:36 crc kubenswrapper[4767]: I1124 23:01:36.260720 4767 scope.go:117] "RemoveContainer" 
containerID="65dd25949d2692848339b7c7f03d3a6b02a7879e37418fae805271d6028ce665" Nov 24 23:01:36 crc kubenswrapper[4767]: I1124 23:01:36.261625 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:01:36 crc kubenswrapper[4767]: E1124 23:01:36.262074 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:01:49 crc kubenswrapper[4767]: I1124 23:01:49.314366 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:01:49 crc kubenswrapper[4767]: E1124 23:01:49.315611 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:02:04 crc kubenswrapper[4767]: I1124 23:02:04.314904 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:02:04 crc kubenswrapper[4767]: E1124 23:02:04.316338 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:02:15 crc kubenswrapper[4767]: I1124 23:02:15.313991 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:02:15 crc kubenswrapper[4767]: E1124 23:02:15.315030 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.348549 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9vb4r"] Nov 24 23:02:18 crc kubenswrapper[4767]: E1124 23:02:18.351512 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fc4d42-16a8-4051-afa0-e47332ee72bf" containerName="keystone-cron" Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.351544 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fc4d42-16a8-4051-afa0-e47332ee72bf" containerName="keystone-cron" Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.351825 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="98fc4d42-16a8-4051-afa0-e47332ee72bf" containerName="keystone-cron" Nov 24 23:02:18 crc 
kubenswrapper[4767]: I1124 23:02:18.353336 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.363954 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vb4r"]
Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.443957 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-catalog-content\") pod \"redhat-operators-9vb4r\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") " pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.444049 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-utilities\") pod \"redhat-operators-9vb4r\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") " pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.444131 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvzdn\" (UniqueName: \"kubernetes.io/projected/2991c530-a1f8-4c67-95c7-699e4c874712-kube-api-access-wvzdn\") pod \"redhat-operators-9vb4r\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") " pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.547910 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-catalog-content\") pod \"redhat-operators-9vb4r\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") " pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.547993 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-utilities\") pod \"redhat-operators-9vb4r\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") " pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.548064 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvzdn\" (UniqueName: \"kubernetes.io/projected/2991c530-a1f8-4c67-95c7-699e4c874712-kube-api-access-wvzdn\") pod \"redhat-operators-9vb4r\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") " pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.548587 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-catalog-content\") pod \"redhat-operators-9vb4r\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") " pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.548688 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-utilities\") pod \"redhat-operators-9vb4r\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") " pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:18 crc kubenswrapper[4767]: I1124
23:02:18.570838 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvzdn\" (UniqueName: \"kubernetes.io/projected/2991c530-a1f8-4c67-95c7-699e4c874712-kube-api-access-wvzdn\") pod \"redhat-operators-9vb4r\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") " pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:18 crc kubenswrapper[4767]: I1124 23:02:18.679649 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:19 crc kubenswrapper[4767]: I1124 23:02:19.116597 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vb4r"]
Nov 24 23:02:19 crc kubenswrapper[4767]: I1124 23:02:19.807589 4767 generic.go:334] "Generic (PLEG): container finished" podID="2991c530-a1f8-4c67-95c7-699e4c874712" containerID="368b20341daf4ff153bccb57b056368f09dca2bf38b470afdf3baf770c1e179c" exitCode=0
Nov 24 23:02:19 crc kubenswrapper[4767]: I1124 23:02:19.807722 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vb4r" event={"ID":"2991c530-a1f8-4c67-95c7-699e4c874712","Type":"ContainerDied","Data":"368b20341daf4ff153bccb57b056368f09dca2bf38b470afdf3baf770c1e179c"}
Nov 24 23:02:19 crc kubenswrapper[4767]: I1124 23:02:19.807982 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vb4r" event={"ID":"2991c530-a1f8-4c67-95c7-699e4c874712","Type":"ContainerStarted","Data":"b1567ff76272afc8250304ed53bbf84aab88df89e8961825d5a0db7f3780c6c1"}
Nov 24 23:02:20 crc kubenswrapper[4767]: I1124 23:02:20.819469 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vb4r" event={"ID":"2991c530-a1f8-4c67-95c7-699e4c874712","Type":"ContainerStarted","Data":"f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8"}
Nov 24 23:02:22 crc kubenswrapper[4767]: I1124 23:02:22.842451 4767 generic.go:334] "Generic (PLEG): container finished" podID="2991c530-a1f8-4c67-95c7-699e4c874712" containerID="f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8" exitCode=0
Nov 24 23:02:22 crc kubenswrapper[4767]: I1124 23:02:22.842515 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vb4r" event={"ID":"2991c530-a1f8-4c67-95c7-699e4c874712","Type":"ContainerDied","Data":"f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8"}
Nov 24 23:02:23 crc kubenswrapper[4767]: I1124 23:02:23.862853 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vb4r" event={"ID":"2991c530-a1f8-4c67-95c7-699e4c874712","Type":"ContainerStarted","Data":"b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418"}
Nov 24 23:02:23 crc kubenswrapper[4767]: I1124 23:02:23.891323 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9vb4r" podStartSLOduration=2.338937592 podStartE2EDuration="5.891300558s" podCreationTimestamp="2025-11-24 23:02:18 +0000 UTC" firstStartedPulling="2025-11-24 23:02:19.812165678 +0000 UTC m=+5022.729149080" lastFinishedPulling="2025-11-24 23:02:23.364528674 +0000 UTC m=+5026.281512046" observedRunningTime="2025-11-24 23:02:23.887795529 +0000 UTC m=+5026.804778911" watchObservedRunningTime="2025-11-24 23:02:23.891300558 +0000 UTC m=+5026.808283940"
Nov 24 23:02:28 crc kubenswrapper[4767]: I1124 23:02:28.680769 4767 kubelet.go:2542] "SyncLoop (probe)"
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9vb4r" Nov 24 23:02:28 crc kubenswrapper[4767]: I1124 23:02:28.681532 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9vb4r" Nov 24 23:02:29 crc kubenswrapper[4767]: I1124 23:02:29.732739 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9vb4r" podUID="2991c530-a1f8-4c67-95c7-699e4c874712" containerName="registry-server" probeResult="failure" output=< Nov 24 23:02:29 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 23:02:29 crc kubenswrapper[4767]: > Nov 24 23:02:30 crc kubenswrapper[4767]: I1124 23:02:30.313915 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:02:30 crc kubenswrapper[4767]: E1124 23:02:30.314401 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:02:38 crc kubenswrapper[4767]: I1124 23:02:38.758224 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9vb4r" Nov 24 23:02:38 crc kubenswrapper[4767]: I1124 23:02:38.815153 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9vb4r" Nov 24 23:02:39 crc kubenswrapper[4767]: I1124 23:02:39.000792 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vb4r"] Nov 24 23:02:40 crc kubenswrapper[4767]: I1124 23:02:40.036000 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9vb4r" podUID="2991c530-a1f8-4c67-95c7-699e4c874712" containerName="registry-server" containerID="cri-o://b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418" gracePeriod=2 Nov 24 23:02:40 crc kubenswrapper[4767]: I1124 23:02:40.542587 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9vb4r"
Nov 24 23:02:40 crc kubenswrapper[4767]: I1124 23:02:40.613762 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-catalog-content\") pod \"2991c530-a1f8-4c67-95c7-699e4c874712\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") "
Nov 24 23:02:40 crc kubenswrapper[4767]: I1124 23:02:40.614021 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvzdn\" (UniqueName: \"kubernetes.io/projected/2991c530-a1f8-4c67-95c7-699e4c874712-kube-api-access-wvzdn\") pod \"2991c530-a1f8-4c67-95c7-699e4c874712\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") "
Nov 24 23:02:40 crc kubenswrapper[4767]: I1124 23:02:40.614098 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-utilities\") pod \"2991c530-a1f8-4c67-95c7-699e4c874712\" (UID: \"2991c530-a1f8-4c67-95c7-699e4c874712\") "
Nov 24 23:02:40 crc kubenswrapper[4767]: I1124 23:02:40.615010 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-utilities" (OuterVolumeSpecName: "utilities") pod "2991c530-a1f8-4c67-95c7-699e4c874712" (UID: "2991c530-a1f8-4c67-95c7-699e4c874712"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 23:02:40 crc kubenswrapper[4767]: I1124 23:02:40.619829 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2991c530-a1f8-4c67-95c7-699e4c874712-kube-api-access-wvzdn" (OuterVolumeSpecName: "kube-api-access-wvzdn") pod "2991c530-a1f8-4c67-95c7-699e4c874712" (UID: "2991c530-a1f8-4c67-95c7-699e4c874712"). InnerVolumeSpecName "kube-api-access-wvzdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 23:02:40 crc kubenswrapper[4767]: I1124 23:02:40.698631 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2991c530-a1f8-4c67-95c7-699e4c874712" (UID: "2991c530-a1f8-4c67-95c7-699e4c874712"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:02:40 crc kubenswrapper[4767]: I1124 23:02:40.716833 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:02:40 crc kubenswrapper[4767]: I1124 23:02:40.716873 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2991c530-a1f8-4c67-95c7-699e4c874712-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:02:40 crc kubenswrapper[4767]: I1124 23:02:40.716885 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvzdn\" (UniqueName: \"kubernetes.io/projected/2991c530-a1f8-4c67-95c7-699e4c874712-kube-api-access-wvzdn\") on node \"crc\" DevicePath \"\"" Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.053854 4767 generic.go:334] "Generic (PLEG): container finished" podID="2991c530-a1f8-4c67-95c7-699e4c874712" containerID="b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418" exitCode=0 Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.053917 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vb4r" event={"ID":"2991c530-a1f8-4c67-95c7-699e4c874712","Type":"ContainerDied","Data":"b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418"} Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.053957 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vb4r" Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.053993 4767 scope.go:117] "RemoveContainer" containerID="b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418" Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.053973 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vb4r" event={"ID":"2991c530-a1f8-4c67-95c7-699e4c874712","Type":"ContainerDied","Data":"b1567ff76272afc8250304ed53bbf84aab88df89e8961825d5a0db7f3780c6c1"} Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.103665 4767 scope.go:117] "RemoveContainer" containerID="f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8" Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.122650 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vb4r"] Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.135130 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9vb4r"] Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.153865 4767 scope.go:117] "RemoveContainer" containerID="368b20341daf4ff153bccb57b056368f09dca2bf38b470afdf3baf770c1e179c" Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.215758 4767 scope.go:117] "RemoveContainer" containerID="b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418" Nov 24 23:02:41 crc kubenswrapper[4767]: E1124 23:02:41.216186 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418\": container with ID starting with b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418 not found: ID does not exist" containerID="b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418" Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.216232 4767 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418"} err="failed to get container status \"b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418\": rpc error: code = NotFound desc = could not find container \"b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418\": container with ID starting with b25dae5767c0ccc31300fdd4429385c22a1876663a8e4b51dd1f5a6e09157418 not found: ID does not exist" Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.216265 4767 scope.go:117] "RemoveContainer" containerID="f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8" Nov 24 23:02:41 crc kubenswrapper[4767]: E1124 23:02:41.216865 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8\": container with ID starting with f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8 not found: ID does not exist" containerID="f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8" Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.216902 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8"} err="failed to get container status \"f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8\": rpc error: code = NotFound desc = could not find container \"f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8\": container with ID starting with f27a50c6364ebe651ea7dfe58d88d0d8916dad3da83f65c9d799e94b6b124da8 not found: ID does not exist" Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.216925 4767 scope.go:117] "RemoveContainer" containerID="368b20341daf4ff153bccb57b056368f09dca2bf38b470afdf3baf770c1e179c" Nov 24 23:02:41 crc kubenswrapper[4767]: E1124 23:02:41.217155 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"368b20341daf4ff153bccb57b056368f09dca2bf38b470afdf3baf770c1e179c\": container with ID starting with 368b20341daf4ff153bccb57b056368f09dca2bf38b470afdf3baf770c1e179c not found: ID does not exist" containerID="368b20341daf4ff153bccb57b056368f09dca2bf38b470afdf3baf770c1e179c" Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.217189 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368b20341daf4ff153bccb57b056368f09dca2bf38b470afdf3baf770c1e179c"} err="failed to get container status \"368b20341daf4ff153bccb57b056368f09dca2bf38b470afdf3baf770c1e179c\": rpc error: code = NotFound desc = could not find container \"368b20341daf4ff153bccb57b056368f09dca2bf38b470afdf3baf770c1e179c\": container with ID starting with 368b20341daf4ff153bccb57b056368f09dca2bf38b470afdf3baf770c1e179c not found: ID does not exist" Nov 24 23:02:41 crc kubenswrapper[4767]: I1124 23:02:41.313404 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:02:41 crc kubenswrapper[4767]: E1124 23:02:41.313783 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:02:42 crc kubenswrapper[4767]: I1124 23:02:42.338073 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2991c530-a1f8-4c67-95c7-699e4c874712" path="/var/lib/kubelet/pods/2991c530-a1f8-4c67-95c7-699e4c874712/volumes" Nov 24 23:02:55 crc kubenswrapper[4767]: I1124 23:02:55.314051 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:02:55 crc kubenswrapper[4767]: E1124 23:02:55.314986 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:03:10 crc kubenswrapper[4767]: I1124 23:03:10.314136 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:03:10 crc kubenswrapper[4767]: E1124 23:03:10.315221 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:03:21 crc kubenswrapper[4767]: I1124 23:03:21.313406 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:03:21 crc kubenswrapper[4767]: E1124 23:03:21.314355 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:03:23 crc kubenswrapper[4767]: I1124 23:03:23.801096 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="b5a55be5-98af-48c4-800f-1595cb7e1959" containerName="galera" probeResult="failure" output="command timed out" Nov 24 23:03:34 crc kubenswrapper[4767]: I1124 23:03:34.314754 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:03:34 crc kubenswrapper[4767]: E1124 23:03:34.315898 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:03:48 crc kubenswrapper[4767]: I1124 23:03:48.320892 4767 scope.go:117] "RemoveContainer" 
containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:03:48 crc kubenswrapper[4767]: E1124 23:03:48.322055 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:04:00 crc kubenswrapper[4767]: I1124 23:04:00.313495 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:04:00 crc kubenswrapper[4767]: E1124 23:04:00.314648 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:04:05 crc kubenswrapper[4767]: I1124 23:04:05.917433 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hqbhj"] Nov 24 23:04:05 crc kubenswrapper[4767]: E1124 23:04:05.918371 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2991c530-a1f8-4c67-95c7-699e4c874712" containerName="registry-server" Nov 24 23:04:05 crc kubenswrapper[4767]: I1124 23:04:05.918384 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="2991c530-a1f8-4c67-95c7-699e4c874712" containerName="registry-server" Nov 24 23:04:05 crc kubenswrapper[4767]: E1124 23:04:05.918413 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2991c530-a1f8-4c67-95c7-699e4c874712" containerName="extract-content" Nov 24 23:04:05 crc kubenswrapper[4767]: I1124 23:04:05.918419 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="2991c530-a1f8-4c67-95c7-699e4c874712" containerName="extract-content" Nov 24 23:04:05 crc kubenswrapper[4767]: E1124 23:04:05.918428 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2991c530-a1f8-4c67-95c7-699e4c874712" containerName="extract-utilities" Nov 24 23:04:05 crc kubenswrapper[4767]: I1124 23:04:05.918434 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="2991c530-a1f8-4c67-95c7-699e4c874712" containerName="extract-utilities" Nov 24 23:04:05 crc kubenswrapper[4767]: I1124 23:04:05.918605 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="2991c530-a1f8-4c67-95c7-699e4c874712" containerName="registry-server" Nov 24 23:04:05 crc kubenswrapper[4767]: I1124 23:04:05.920007 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:05 crc kubenswrapper[4767]: I1124 23:04:05.957681 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqbhj"] Nov 24 23:04:05 crc kubenswrapper[4767]: I1124 23:04:05.999656 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-utilities\") pod \"redhat-marketplace-hqbhj\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:05 crc kubenswrapper[4767]: I1124 23:04:05.999807 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-catalog-content\") pod \"redhat-marketplace-hqbhj\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:06 crc kubenswrapper[4767]: I1124 23:04:06.000121 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t88b7\" (UniqueName: \"kubernetes.io/projected/85abcd26-66f6-44f6-b908-471ce7416474-kube-api-access-t88b7\") pod \"redhat-marketplace-hqbhj\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:06 crc kubenswrapper[4767]: I1124 23:04:06.102199 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-utilities\") pod \"redhat-marketplace-hqbhj\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:06 crc kubenswrapper[4767]: I1124 23:04:06.102254 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-catalog-content\") pod \"redhat-marketplace-hqbhj\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:06 crc kubenswrapper[4767]: I1124 23:04:06.102373 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t88b7\" (UniqueName: \"kubernetes.io/projected/85abcd26-66f6-44f6-b908-471ce7416474-kube-api-access-t88b7\") pod \"redhat-marketplace-hqbhj\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:06 crc kubenswrapper[4767]: I1124 23:04:06.102765 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-utilities\") pod \"redhat-marketplace-hqbhj\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:06 crc kubenswrapper[4767]: I1124 23:04:06.102834 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-catalog-content\") pod \"redhat-marketplace-hqbhj\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:06 crc kubenswrapper[4767]: I1124 23:04:06.121941 4767 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-t88b7\" (UniqueName: \"kubernetes.io/projected/85abcd26-66f6-44f6-b908-471ce7416474-kube-api-access-t88b7\") pod \"redhat-marketplace-hqbhj\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:06 crc kubenswrapper[4767]: I1124 23:04:06.254371 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:06 crc kubenswrapper[4767]: I1124 23:04:06.739349 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqbhj"] Nov 24 23:04:07 crc kubenswrapper[4767]: I1124 23:04:07.078012 4767 generic.go:334] "Generic (PLEG): container finished" podID="85abcd26-66f6-44f6-b908-471ce7416474" containerID="8fe3ec351c3d6341d5a67db017e884981aadd4f6dfc84b96a8ea3f0af48132ce" exitCode=0 Nov 24 23:04:07 crc kubenswrapper[4767]: I1124 23:04:07.078400 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqbhj" event={"ID":"85abcd26-66f6-44f6-b908-471ce7416474","Type":"ContainerDied","Data":"8fe3ec351c3d6341d5a67db017e884981aadd4f6dfc84b96a8ea3f0af48132ce"} Nov 24 23:04:07 crc kubenswrapper[4767]: I1124 23:04:07.078450 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqbhj" event={"ID":"85abcd26-66f6-44f6-b908-471ce7416474","Type":"ContainerStarted","Data":"0208fe6499af4a93855f76c87d3d6154f718b839dfa5922826dc52961c8a5aac"} Nov 24 23:04:08 crc kubenswrapper[4767]: I1124 23:04:08.091493 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqbhj" event={"ID":"85abcd26-66f6-44f6-b908-471ce7416474","Type":"ContainerStarted","Data":"714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1"} Nov 24 23:04:09 crc kubenswrapper[4767]: I1124 23:04:09.105696 4767 generic.go:334] "Generic (PLEG): container finished" podID="85abcd26-66f6-44f6-b908-471ce7416474" containerID="714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1" exitCode=0 Nov 24 23:04:09 crc kubenswrapper[4767]: I1124 23:04:09.105830 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqbhj" event={"ID":"85abcd26-66f6-44f6-b908-471ce7416474","Type":"ContainerDied","Data":"714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1"} Nov 24 23:04:10 crc kubenswrapper[4767]: I1124 23:04:10.119794 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqbhj" event={"ID":"85abcd26-66f6-44f6-b908-471ce7416474","Type":"ContainerStarted","Data":"7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d"} Nov 24 23:04:10 crc kubenswrapper[4767]: I1124 23:04:10.141944 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hqbhj" podStartSLOduration=2.728407465 podStartE2EDuration="5.1419242s" podCreationTimestamp="2025-11-24 23:04:05 +0000 UTC" firstStartedPulling="2025-11-24 23:04:07.080245337 +0000 UTC m=+5129.997228709" lastFinishedPulling="2025-11-24 23:04:09.493762072 +0000 UTC m=+5132.410745444" observedRunningTime="2025-11-24 23:04:10.134143 +0000 UTC m=+5133.051126422" watchObservedRunningTime="2025-11-24 23:04:10.1419242 +0000 UTC m=+5133.058907582" Nov 24 23:04:14 crc kubenswrapper[4767]: I1124 23:04:14.314699 4767 scope.go:117] "RemoveContainer" 
containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:04:14 crc kubenswrapper[4767]: E1124 23:04:14.317024 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:04:16 crc kubenswrapper[4767]: I1124 23:04:16.254809 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:16 crc kubenswrapper[4767]: I1124 23:04:16.255379 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:16 crc kubenswrapper[4767]: I1124 23:04:16.332948 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:17 crc kubenswrapper[4767]: I1124 23:04:17.266706 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:17 crc kubenswrapper[4767]: I1124 23:04:17.328669 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqbhj"] Nov 24 23:04:19 crc kubenswrapper[4767]: I1124 23:04:19.215483 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hqbhj" podUID="85abcd26-66f6-44f6-b908-471ce7416474" containerName="registry-server" containerID="cri-o://7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d" gracePeriod=2 Nov 24 23:04:19 crc kubenswrapper[4767]: I1124 23:04:19.853488 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:19 crc kubenswrapper[4767]: I1124 23:04:19.994010 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-utilities\") pod \"85abcd26-66f6-44f6-b908-471ce7416474\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " Nov 24 23:04:19 crc kubenswrapper[4767]: I1124 23:04:19.994580 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t88b7\" (UniqueName: \"kubernetes.io/projected/85abcd26-66f6-44f6-b908-471ce7416474-kube-api-access-t88b7\") pod \"85abcd26-66f6-44f6-b908-471ce7416474\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " Nov 24 23:04:19 crc kubenswrapper[4767]: I1124 23:04:19.994684 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-catalog-content\") pod \"85abcd26-66f6-44f6-b908-471ce7416474\" (UID: \"85abcd26-66f6-44f6-b908-471ce7416474\") " Nov 24 23:04:19 crc kubenswrapper[4767]: I1124 23:04:19.995523 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-utilities" (OuterVolumeSpecName: "utilities") pod "85abcd26-66f6-44f6-b908-471ce7416474" (UID: "85abcd26-66f6-44f6-b908-471ce7416474"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.003740 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85abcd26-66f6-44f6-b908-471ce7416474-kube-api-access-t88b7" (OuterVolumeSpecName: "kube-api-access-t88b7") pod "85abcd26-66f6-44f6-b908-471ce7416474" (UID: "85abcd26-66f6-44f6-b908-471ce7416474"). InnerVolumeSpecName "kube-api-access-t88b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.032981 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85abcd26-66f6-44f6-b908-471ce7416474" (UID: "85abcd26-66f6-44f6-b908-471ce7416474"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.097005 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.097048 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85abcd26-66f6-44f6-b908-471ce7416474-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.097063 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t88b7\" (UniqueName: \"kubernetes.io/projected/85abcd26-66f6-44f6-b908-471ce7416474-kube-api-access-t88b7\") on node \"crc\" DevicePath \"\"" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.232122 4767 generic.go:334] "Generic (PLEG): container finished" podID="85abcd26-66f6-44f6-b908-471ce7416474" containerID="7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d" exitCode=0 Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.232226 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqbhj" event={"ID":"85abcd26-66f6-44f6-b908-471ce7416474","Type":"ContainerDied","Data":"7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d"} Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.232410 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqbhj" event={"ID":"85abcd26-66f6-44f6-b908-471ce7416474","Type":"ContainerDied","Data":"0208fe6499af4a93855f76c87d3d6154f718b839dfa5922826dc52961c8a5aac"} Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.232239 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqbhj" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.232576 4767 scope.go:117] "RemoveContainer" containerID="7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.298437 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqbhj"] Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.298464 4767 scope.go:117] "RemoveContainer" containerID="714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.308952 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqbhj"] Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.329931 4767 scope.go:117] "RemoveContainer" containerID="8fe3ec351c3d6341d5a67db017e884981aadd4f6dfc84b96a8ea3f0af48132ce" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.331176 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85abcd26-66f6-44f6-b908-471ce7416474" path="/var/lib/kubelet/pods/85abcd26-66f6-44f6-b908-471ce7416474/volumes" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.392717 4767 scope.go:117] "RemoveContainer" containerID="7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d" Nov 24 23:04:20 crc kubenswrapper[4767]: E1124 23:04:20.393398 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d\": container with ID starting with 7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d not found: ID does not exist" containerID="7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.393456 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d"} err="failed to get container status \"7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d\": rpc error: code = NotFound desc = could not find container \"7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d\": container with ID starting with 7e7de9f05be8fb01d8ab6b0ad839d17a9f6dbdb3438fe19fe27e67dd160b4d3d not found: ID does not exist" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.393495 4767 scope.go:117] "RemoveContainer" containerID="714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1" Nov 24 23:04:20 crc kubenswrapper[4767]: E1124 23:04:20.393947 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1\": container with ID starting with 714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1 not found: ID does not exist" containerID="714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.394003 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1"} err="failed to get container status \"714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1\": rpc error: code = NotFound desc = could not find container 
\"714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1\": container with ID starting with 714b8f138bc14950e52fceafe3620471ce9bec77a1cbf16daa5e34499100f5d1 not found: ID does not exist" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.394040 4767 scope.go:117] "RemoveContainer" containerID="8fe3ec351c3d6341d5a67db017e884981aadd4f6dfc84b96a8ea3f0af48132ce" Nov 24 23:04:20 crc kubenswrapper[4767]: E1124 23:04:20.394531 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe3ec351c3d6341d5a67db017e884981aadd4f6dfc84b96a8ea3f0af48132ce\": container with ID starting with 8fe3ec351c3d6341d5a67db017e884981aadd4f6dfc84b96a8ea3f0af48132ce not found: ID does not exist" containerID="8fe3ec351c3d6341d5a67db017e884981aadd4f6dfc84b96a8ea3f0af48132ce" Nov 24 23:04:20 crc kubenswrapper[4767]: I1124 23:04:20.394586 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe3ec351c3d6341d5a67db017e884981aadd4f6dfc84b96a8ea3f0af48132ce"} err="failed to get container status \"8fe3ec351c3d6341d5a67db017e884981aadd4f6dfc84b96a8ea3f0af48132ce\": rpc error: code = NotFound desc = could not find container \"8fe3ec351c3d6341d5a67db017e884981aadd4f6dfc84b96a8ea3f0af48132ce\": container with ID starting with 8fe3ec351c3d6341d5a67db017e884981aadd4f6dfc84b96a8ea3f0af48132ce not found: ID does not exist" Nov 24 23:04:29 crc kubenswrapper[4767]: I1124 23:04:29.313997 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:04:29 crc kubenswrapper[4767]: E1124 23:04:29.314929 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:04:42 crc kubenswrapper[4767]: I1124 23:04:42.314520 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:04:42 crc kubenswrapper[4767]: E1124 23:04:42.315460 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:04:57 crc kubenswrapper[4767]: I1124 23:04:57.313545 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:04:57 crc kubenswrapper[4767]: E1124 23:04:57.314593 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:05:09 crc kubenswrapper[4767]: I1124 23:05:09.313887 4767 scope.go:117] "RemoveContainer" 
containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:05:09 crc kubenswrapper[4767]: E1124 23:05:09.314780 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:05:22 crc kubenswrapper[4767]: I1124 23:05:22.313685 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:05:22 crc kubenswrapper[4767]: E1124 23:05:22.314595 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:05:34 crc kubenswrapper[4767]: I1124 23:05:34.314969 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:05:34 crc kubenswrapper[4767]: E1124 23:05:34.315921 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:05:45 crc kubenswrapper[4767]: I1124 23:05:45.314408 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:05:45 crc kubenswrapper[4767]: E1124 23:05:45.315388 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:05:59 crc kubenswrapper[4767]: I1124 23:05:59.314649 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:05:59 crc kubenswrapper[4767]: E1124 23:05:59.315668 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:06:14 crc kubenswrapper[4767]: I1124 23:06:14.315215 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:06:14 crc kubenswrapper[4767]: E1124 23:06:14.316564 4767 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:06:28 crc kubenswrapper[4767]: I1124 23:06:28.319938 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:06:28 crc kubenswrapper[4767]: E1124 23:06:28.320867 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:06:41 crc kubenswrapper[4767]: I1124 23:06:41.313879 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:06:42 crc kubenswrapper[4767]: I1124 23:06:42.197929 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"2bfe974092644fe445db1ce371a14acca485ed51c9e01ecfd766ba80f7f58d2a"} Nov 24 23:08:05 crc kubenswrapper[4767]: I1124 23:08:05.834103 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n754w"] Nov 24 23:08:05 crc kubenswrapper[4767]: E1124 23:08:05.860844 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85abcd26-66f6-44f6-b908-471ce7416474" containerName="extract-utilities" Nov 24 23:08:05 crc kubenswrapper[4767]: I1124 23:08:05.860878 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="85abcd26-66f6-44f6-b908-471ce7416474" containerName="extract-utilities" Nov 24 23:08:05 crc kubenswrapper[4767]: E1124 23:08:05.860898 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85abcd26-66f6-44f6-b908-471ce7416474" containerName="extract-content" Nov 24 23:08:05 crc kubenswrapper[4767]: I1124 23:08:05.860906 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="85abcd26-66f6-44f6-b908-471ce7416474" containerName="extract-content" Nov 24 23:08:05 crc kubenswrapper[4767]: E1124 23:08:05.860934 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85abcd26-66f6-44f6-b908-471ce7416474" containerName="registry-server" Nov 24 23:08:05 crc kubenswrapper[4767]: I1124 23:08:05.860941 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="85abcd26-66f6-44f6-b908-471ce7416474" containerName="registry-server" Nov 24 23:08:05 crc kubenswrapper[4767]: I1124 23:08:05.861232 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="85abcd26-66f6-44f6-b908-471ce7416474" containerName="registry-server" Nov 24 23:08:05 crc kubenswrapper[4767]: I1124 23:08:05.863005 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:05 crc kubenswrapper[4767]: I1124 23:08:05.866229 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n754w"] Nov 24 23:08:05 crc kubenswrapper[4767]: I1124 23:08:05.955115 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-catalog-content\") pod \"certified-operators-n754w\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:05 crc kubenswrapper[4767]: I1124 23:08:05.955539 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-utilities\") pod \"certified-operators-n754w\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:05 crc kubenswrapper[4767]: I1124 23:08:05.955596 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl72d\" (UniqueName: \"kubernetes.io/projected/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-kube-api-access-nl72d\") pod \"certified-operators-n754w\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:06 crc kubenswrapper[4767]: I1124 23:08:06.057246 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-catalog-content\") pod \"certified-operators-n754w\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:06 crc kubenswrapper[4767]: I1124 23:08:06.057413 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-utilities\") pod \"certified-operators-n754w\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:06 crc kubenswrapper[4767]: I1124 23:08:06.057456 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl72d\" (UniqueName: \"kubernetes.io/projected/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-kube-api-access-nl72d\") pod \"certified-operators-n754w\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:06 crc kubenswrapper[4767]: I1124 23:08:06.058876 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-utilities\") pod \"certified-operators-n754w\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:06 crc kubenswrapper[4767]: I1124 23:08:06.059068 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-catalog-content\") pod \"certified-operators-n754w\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:06 crc kubenswrapper[4767]: I1124 23:08:06.088537 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nl72d\" (UniqueName: \"kubernetes.io/projected/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-kube-api-access-nl72d\") pod \"certified-operators-n754w\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:06 crc kubenswrapper[4767]: I1124 23:08:06.184284 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:06 crc kubenswrapper[4767]: I1124 23:08:06.696265 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n754w"] Nov 24 23:08:07 crc kubenswrapper[4767]: I1124 23:08:07.227080 4767 generic.go:334] "Generic (PLEG): container finished" podID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" containerID="d1c4c70fb6343968c7693ce1960ab278e781da22eae6b21747a12403c607154f" exitCode=0 Nov 24 23:08:07 crc kubenswrapper[4767]: I1124 23:08:07.227173 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n754w" event={"ID":"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf","Type":"ContainerDied","Data":"d1c4c70fb6343968c7693ce1960ab278e781da22eae6b21747a12403c607154f"} Nov 24 23:08:07 crc kubenswrapper[4767]: I1124 23:08:07.227483 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n754w" event={"ID":"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf","Type":"ContainerStarted","Data":"ade89042259522184f9e9c69c163a1d12c1f09cf5fe2000929bc10dfff6c9514"} Nov 24 23:08:07 crc kubenswrapper[4767]: I1124 23:08:07.229970 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 23:08:08 crc kubenswrapper[4767]: I1124 23:08:08.239702 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n754w" event={"ID":"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf","Type":"ContainerStarted","Data":"078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5"} Nov 24 23:08:09 crc kubenswrapper[4767]: I1124 23:08:09.253517 4767 generic.go:334] "Generic (PLEG): container finished" podID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" containerID="078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5" exitCode=0 Nov 24 23:08:09 crc kubenswrapper[4767]: I1124 23:08:09.253575 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n754w" event={"ID":"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf","Type":"ContainerDied","Data":"078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5"} Nov 24 23:08:10 crc kubenswrapper[4767]: I1124 23:08:10.272602 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n754w" event={"ID":"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf","Type":"ContainerStarted","Data":"d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6"} Nov 24 23:08:10 crc kubenswrapper[4767]: I1124 23:08:10.299528 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n754w" podStartSLOduration=2.907984034 podStartE2EDuration="5.299511143s" podCreationTimestamp="2025-11-24 23:08:05 +0000 UTC" firstStartedPulling="2025-11-24 23:08:07.229538011 +0000 UTC m=+5370.146521423" lastFinishedPulling="2025-11-24 23:08:09.62106515 +0000 UTC m=+5372.538048532" observedRunningTime="2025-11-24 23:08:10.291203968 +0000 UTC m=+5373.208187350" watchObservedRunningTime="2025-11-24 
23:08:10.299511143 +0000 UTC m=+5373.216494515" Nov 24 23:08:16 crc kubenswrapper[4767]: I1124 23:08:16.184254 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:16 crc kubenswrapper[4767]: I1124 23:08:16.185178 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:16 crc kubenswrapper[4767]: I1124 23:08:16.275395 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:16 crc kubenswrapper[4767]: I1124 23:08:16.384059 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:16 crc kubenswrapper[4767]: I1124 23:08:16.523009 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n754w"] Nov 24 23:08:18 crc kubenswrapper[4767]: I1124 23:08:18.350571 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n754w" podUID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" containerName="registry-server" containerID="cri-o://d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6" gracePeriod=2 Nov 24 23:08:18 crc kubenswrapper[4767]: I1124 23:08:18.836435 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:18 crc kubenswrapper[4767]: I1124 23:08:18.926121 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-catalog-content\") pod \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " Nov 24 23:08:18 crc kubenswrapper[4767]: I1124 23:08:18.926186 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-utilities\") pod \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " Nov 24 23:08:18 crc kubenswrapper[4767]: I1124 23:08:18.926244 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl72d\" (UniqueName: \"kubernetes.io/projected/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-kube-api-access-nl72d\") pod \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\" (UID: \"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf\") " Nov 24 23:08:18 crc kubenswrapper[4767]: I1124 23:08:18.927323 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-utilities" (OuterVolumeSpecName: "utilities") pod "f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" (UID: "f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:08:18 crc kubenswrapper[4767]: I1124 23:08:18.933260 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-kube-api-access-nl72d" (OuterVolumeSpecName: "kube-api-access-nl72d") pod "f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" (UID: "f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf"). InnerVolumeSpecName "kube-api-access-nl72d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:08:18 crc kubenswrapper[4767]: I1124 23:08:18.979018 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" (UID: "f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.032089 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.032487 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.032500 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl72d\" (UniqueName: \"kubernetes.io/projected/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf-kube-api-access-nl72d\") on node \"crc\" DevicePath \"\"" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.364374 4767 generic.go:334] "Generic (PLEG): container finished" podID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" containerID="d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6" exitCode=0 Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.364422 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n754w" event={"ID":"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf","Type":"ContainerDied","Data":"d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6"} Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.364454 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n754w" event={"ID":"f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf","Type":"ContainerDied","Data":"ade89042259522184f9e9c69c163a1d12c1f09cf5fe2000929bc10dfff6c9514"} Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.364475 4767 scope.go:117] "RemoveContainer" containerID="d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.364500 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n754w" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.391518 4767 scope.go:117] "RemoveContainer" containerID="078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.422294 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n754w"] Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.433140 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n754w"] Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.642324 4767 scope.go:117] "RemoveContainer" containerID="d1c4c70fb6343968c7693ce1960ab278e781da22eae6b21747a12403c607154f" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.672965 4767 scope.go:117] "RemoveContainer" containerID="d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6" Nov 24 23:08:19 crc kubenswrapper[4767]: E1124 23:08:19.673406 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6\": container with ID starting with d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6 not found: ID does not exist" containerID="d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.673490 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6"} err="failed to get container status \"d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6\": rpc error: code = NotFound desc = could not find container \"d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6\": container with ID starting with d57168c0194fe2bd2efb58810129e842563863fe04251214573b34ce079bcff6 not found: ID does not exist" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.673547 4767 scope.go:117] "RemoveContainer" containerID="078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5" Nov 24 23:08:19 crc kubenswrapper[4767]: E1124 23:08:19.674978 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5\": container with ID starting with 078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5 not found: ID does not exist" containerID="078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.675017 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5"} err="failed to get container status \"078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5\": rpc error: code = NotFound desc = could not find container \"078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5\": container with ID starting with 078d6a438fb3fae42aab5fcbe4549f64fdcc8d0262c5728330bfd37ce7ac8fe5 not found: ID does not exist" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.675040 4767 scope.go:117] "RemoveContainer" containerID="d1c4c70fb6343968c7693ce1960ab278e781da22eae6b21747a12403c607154f" Nov 24 23:08:19 crc kubenswrapper[4767]: E1124 23:08:19.675343 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d1c4c70fb6343968c7693ce1960ab278e781da22eae6b21747a12403c607154f\": container with ID starting with d1c4c70fb6343968c7693ce1960ab278e781da22eae6b21747a12403c607154f not found: ID does not exist" containerID="d1c4c70fb6343968c7693ce1960ab278e781da22eae6b21747a12403c607154f" Nov 24 23:08:19 crc kubenswrapper[4767]: I1124 23:08:19.675369 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1c4c70fb6343968c7693ce1960ab278e781da22eae6b21747a12403c607154f"} err="failed to get container status \"d1c4c70fb6343968c7693ce1960ab278e781da22eae6b21747a12403c607154f\": rpc error: code = NotFound desc = could not find container \"d1c4c70fb6343968c7693ce1960ab278e781da22eae6b21747a12403c607154f\": container with ID starting with d1c4c70fb6343968c7693ce1960ab278e781da22eae6b21747a12403c607154f not found: ID does not exist" Nov 24 23:08:20 crc kubenswrapper[4767]: I1124 23:08:20.329473 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" path="/var/lib/kubelet/pods/f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf/volumes" Nov 24 23:09:05 crc kubenswrapper[4767]: I1124 23:09:05.481614 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:09:05 crc kubenswrapper[4767]: I1124 23:09:05.483780 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:09:35 crc kubenswrapper[4767]: I1124 23:09:35.481982 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:09:35 crc kubenswrapper[4767]: I1124 23:09:35.482671 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:10:05 crc kubenswrapper[4767]: I1124 23:10:05.481777 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:10:05 crc kubenswrapper[4767]: I1124 23:10:05.482815 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:10:05 crc kubenswrapper[4767]: I1124 23:10:05.482892 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 23:10:05 crc kubenswrapper[4767]: I1124 23:10:05.484173 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bfe974092644fe445db1ce371a14acca485ed51c9e01ecfd766ba80f7f58d2a"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 23:10:05 crc kubenswrapper[4767]: I1124 23:10:05.484524 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://2bfe974092644fe445db1ce371a14acca485ed51c9e01ecfd766ba80f7f58d2a" gracePeriod=600 Nov 24 23:10:06 crc kubenswrapper[4767]: I1124 23:10:06.631888 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="2bfe974092644fe445db1ce371a14acca485ed51c9e01ecfd766ba80f7f58d2a" exitCode=0 Nov 24 23:10:06 crc kubenswrapper[4767]: I1124 23:10:06.632129 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"2bfe974092644fe445db1ce371a14acca485ed51c9e01ecfd766ba80f7f58d2a"} Nov 24 23:10:06 crc kubenswrapper[4767]: I1124 23:10:06.632258 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0"} Nov 24 23:10:06 crc kubenswrapper[4767]: I1124 23:10:06.632310 4767 scope.go:117] "RemoveContainer" containerID="e2a35c1d7e7f296e24c8aebfe33e1c98fcab14e8710507fe8bce6caf7c7c8bba" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.148760 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gx8db"] Nov 24 23:10:39 crc kubenswrapper[4767]: E1124 23:10:39.149719 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" containerName="extract-utilities" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.149736 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" containerName="extract-utilities" Nov 24 23:10:39 crc kubenswrapper[4767]: E1124 23:10:39.149755 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" containerName="registry-server" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.149763 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" containerName="registry-server" Nov 24 23:10:39 crc kubenswrapper[4767]: E1124 23:10:39.149782 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" containerName="extract-content" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.149790 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" containerName="extract-content" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.150060 4767 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f6ff3ca0-7ce5-4b15-8e2f-b2e7673ca5cf" containerName="registry-server" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.151758 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.169595 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gx8db"] Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.313117 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-utilities\") pod \"community-operators-gx8db\" (UID: \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.313457 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm7st\" (UniqueName: \"kubernetes.io/projected/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-kube-api-access-qm7st\") pod \"community-operators-gx8db\" (UID: \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.313786 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-catalog-content\") pod \"community-operators-gx8db\" (UID: \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.416049 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-catalog-content\") pod \"community-operators-gx8db\" (UID: \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.416157 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-utilities\") pod \"community-operators-gx8db\" (UID: \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.416209 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm7st\" (UniqueName: \"kubernetes.io/projected/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-kube-api-access-qm7st\") pod \"community-operators-gx8db\" (UID: \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.416725 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-utilities\") pod \"community-operators-gx8db\" (UID: \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.417321 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-catalog-content\") pod \"community-operators-gx8db\" (UID: 
\"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.436621 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm7st\" (UniqueName: \"kubernetes.io/projected/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-kube-api-access-qm7st\") pod \"community-operators-gx8db\" (UID: \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:39 crc kubenswrapper[4767]: I1124 23:10:39.497582 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:40 crc kubenswrapper[4767]: I1124 23:10:40.013377 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gx8db"] Nov 24 23:10:40 crc kubenswrapper[4767]: W1124 23:10:40.016235 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d8dca76_4da2_4160_93c1_5f1e3cc1aebb.slice/crio-028eb378f49633a986278a5aae391dd43b7174f17f4bce97836fb7f33847337f WatchSource:0}: Error finding container 028eb378f49633a986278a5aae391dd43b7174f17f4bce97836fb7f33847337f: Status 404 returned error can't find the container with id 028eb378f49633a986278a5aae391dd43b7174f17f4bce97836fb7f33847337f Nov 24 23:10:41 crc kubenswrapper[4767]: I1124 23:10:41.033655 4767 generic.go:334] "Generic (PLEG): container finished" podID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" containerID="fab7c5034a399cde485f371dcec8586c1f194880c781f8c511cf8210e5fa48b1" exitCode=0 Nov 24 23:10:41 crc kubenswrapper[4767]: I1124 23:10:41.033739 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx8db" event={"ID":"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb","Type":"ContainerDied","Data":"fab7c5034a399cde485f371dcec8586c1f194880c781f8c511cf8210e5fa48b1"} Nov 24 23:10:41 crc kubenswrapper[4767]: I1124 23:10:41.034143 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx8db" event={"ID":"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb","Type":"ContainerStarted","Data":"028eb378f49633a986278a5aae391dd43b7174f17f4bce97836fb7f33847337f"} Nov 24 23:10:42 crc kubenswrapper[4767]: I1124 23:10:42.055562 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx8db" event={"ID":"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb","Type":"ContainerStarted","Data":"6402831d61bf996d86695687891c4735d6c54ede2ad4c7a650929a5b5d22e26c"} Nov 24 23:10:43 crc kubenswrapper[4767]: I1124 23:10:43.071921 4767 generic.go:334] "Generic (PLEG): container finished" podID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" containerID="6402831d61bf996d86695687891c4735d6c54ede2ad4c7a650929a5b5d22e26c" exitCode=0 Nov 24 23:10:43 crc kubenswrapper[4767]: I1124 23:10:43.071970 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx8db" event={"ID":"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb","Type":"ContainerDied","Data":"6402831d61bf996d86695687891c4735d6c54ede2ad4c7a650929a5b5d22e26c"} Nov 24 23:10:44 crc kubenswrapper[4767]: I1124 23:10:44.086129 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx8db" event={"ID":"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb","Type":"ContainerStarted","Data":"206bed68547d9677bd82f6c4358ba56aaca321f0a05a8f7db14364d3ca6f76ef"} Nov 24 23:10:44 crc 
kubenswrapper[4767]: I1124 23:10:44.123497 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gx8db" podStartSLOduration=2.678952565 podStartE2EDuration="5.123469582s" podCreationTimestamp="2025-11-24 23:10:39 +0000 UTC" firstStartedPulling="2025-11-24 23:10:41.037774246 +0000 UTC m=+5523.954757648" lastFinishedPulling="2025-11-24 23:10:43.482291283 +0000 UTC m=+5526.399274665" observedRunningTime="2025-11-24 23:10:44.10853702 +0000 UTC m=+5527.025520402" watchObservedRunningTime="2025-11-24 23:10:44.123469582 +0000 UTC m=+5527.040452994" Nov 24 23:10:49 crc kubenswrapper[4767]: I1124 23:10:49.498719 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:49 crc kubenswrapper[4767]: I1124 23:10:49.499024 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:50 crc kubenswrapper[4767]: I1124 23:10:50.246757 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:50 crc kubenswrapper[4767]: I1124 23:10:50.304128 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:50 crc kubenswrapper[4767]: I1124 23:10:50.491016 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gx8db"] Nov 24 23:10:52 crc kubenswrapper[4767]: I1124 23:10:52.168435 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gx8db" podUID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" containerName="registry-server" containerID="cri-o://206bed68547d9677bd82f6c4358ba56aaca321f0a05a8f7db14364d3ca6f76ef" gracePeriod=2 Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.179311 4767 generic.go:334] "Generic (PLEG): container finished" podID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" containerID="206bed68547d9677bd82f6c4358ba56aaca321f0a05a8f7db14364d3ca6f76ef" exitCode=0 Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.179424 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx8db" event={"ID":"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb","Type":"ContainerDied","Data":"206bed68547d9677bd82f6c4358ba56aaca321f0a05a8f7db14364d3ca6f76ef"} Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.179600 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx8db" event={"ID":"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb","Type":"ContainerDied","Data":"028eb378f49633a986278a5aae391dd43b7174f17f4bce97836fb7f33847337f"} Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.179617 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="028eb378f49633a986278a5aae391dd43b7174f17f4bce97836fb7f33847337f" Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.216013 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.323528 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm7st\" (UniqueName: \"kubernetes.io/projected/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-kube-api-access-qm7st\") pod \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\" (UID: \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.323688 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-catalog-content\") pod \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\" (UID: \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.323750 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-utilities\") pod \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\" (UID: \"9d8dca76-4da2-4160-93c1-5f1e3cc1aebb\") " Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.324945 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-utilities" (OuterVolumeSpecName: "utilities") pod "9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" (UID: "9d8dca76-4da2-4160-93c1-5f1e3cc1aebb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.329997 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-kube-api-access-qm7st" (OuterVolumeSpecName: "kube-api-access-qm7st") pod "9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" (UID: "9d8dca76-4da2-4160-93c1-5f1e3cc1aebb"). InnerVolumeSpecName "kube-api-access-qm7st". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.375140 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" (UID: "9d8dca76-4da2-4160-93c1-5f1e3cc1aebb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.426336 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.426364 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:10:53 crc kubenswrapper[4767]: I1124 23:10:53.426374 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm7st\" (UniqueName: \"kubernetes.io/projected/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb-kube-api-access-qm7st\") on node \"crc\" DevicePath \"\"" Nov 24 23:10:54 crc kubenswrapper[4767]: I1124 23:10:54.209008 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gx8db" Nov 24 23:10:54 crc kubenswrapper[4767]: I1124 23:10:54.265785 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gx8db"] Nov 24 23:10:54 crc kubenswrapper[4767]: I1124 23:10:54.278243 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gx8db"] Nov 24 23:10:54 crc kubenswrapper[4767]: I1124 23:10:54.326586 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" path="/var/lib/kubelet/pods/9d8dca76-4da2-4160-93c1-5f1e3cc1aebb/volumes" Nov 24 23:12:05 crc kubenswrapper[4767]: I1124 23:12:05.481533 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:12:05 crc kubenswrapper[4767]: I1124 23:12:05.483046 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:12:35 crc kubenswrapper[4767]: I1124 23:12:35.481156 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:12:35 crc kubenswrapper[4767]: I1124 23:12:35.482148 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:13:05 crc kubenswrapper[4767]: I1124 23:13:05.482041 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:13:05 crc kubenswrapper[4767]: I1124 23:13:05.483222 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:13:05 crc kubenswrapper[4767]: I1124 23:13:05.483325 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 23:13:05 crc kubenswrapper[4767]: I1124 23:13:05.484384 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Nov 24 23:13:05 crc kubenswrapper[4767]: I1124 23:13:05.484486 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" gracePeriod=600 Nov 24 23:13:05 crc kubenswrapper[4767]: E1124 23:13:05.604543 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:13:05 crc kubenswrapper[4767]: I1124 23:13:05.664442 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" exitCode=0 Nov 24 23:13:05 crc kubenswrapper[4767]: I1124 23:13:05.664516 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0"} Nov 24 23:13:05 crc kubenswrapper[4767]: I1124 23:13:05.664810 4767 scope.go:117] "RemoveContainer" containerID="2bfe974092644fe445db1ce371a14acca485ed51c9e01ecfd766ba80f7f58d2a" Nov 24 23:13:05 crc kubenswrapper[4767]: I1124 23:13:05.665395 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:13:05 crc kubenswrapper[4767]: E1124 23:13:05.665980 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:13:17 crc kubenswrapper[4767]: I1124 23:13:17.314303 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:13:17 crc kubenswrapper[4767]: E1124 23:13:17.315807 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:13:29 crc kubenswrapper[4767]: I1124 23:13:29.314175 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:13:29 crc kubenswrapper[4767]: E1124 23:13:29.315169 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:13:42 crc kubenswrapper[4767]: I1124 23:13:42.313671 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:13:42 crc kubenswrapper[4767]: E1124 23:13:42.315007 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:13:57 crc kubenswrapper[4767]: I1124 23:13:57.315183 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:13:57 crc kubenswrapper[4767]: E1124 23:13:57.316628 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:14:08 crc kubenswrapper[4767]: I1124 23:14:08.321633 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:14:08 crc kubenswrapper[4767]: E1124 23:14:08.322765 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:14:22 crc kubenswrapper[4767]: I1124 23:14:22.314205 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:14:22 crc kubenswrapper[4767]: E1124 23:14:22.315314 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:14:34 crc kubenswrapper[4767]: I1124 23:14:34.314541 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:14:34 crc kubenswrapper[4767]: E1124 23:14:34.315955 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" 
podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:14:49 crc kubenswrapper[4767]: I1124 23:14:49.314208 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:14:49 crc kubenswrapper[4767]: E1124 23:14:49.315207 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.174422 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7"] Nov 24 23:15:00 crc kubenswrapper[4767]: E1124 23:15:00.175619 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" containerName="extract-utilities" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.175642 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" containerName="extract-utilities" Nov 24 23:15:00 crc kubenswrapper[4767]: E1124 23:15:00.175677 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" containerName="registry-server" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.175690 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" containerName="registry-server" Nov 24 23:15:00 crc kubenswrapper[4767]: E1124 23:15:00.175729 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" containerName="extract-content" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.175743 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" containerName="extract-content" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.176140 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d8dca76-4da2-4160-93c1-5f1e3cc1aebb" containerName="registry-server" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.177409 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.181181 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.181251 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.227151 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7"] Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.325038 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf42j\" (UniqueName: \"kubernetes.io/projected/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-kube-api-access-gf42j\") pod \"collect-profiles-29400435-8spm7\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.325078 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-config-volume\") pod \"collect-profiles-29400435-8spm7\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.325122 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-secret-volume\") pod \"collect-profiles-29400435-8spm7\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.426803 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf42j\" (UniqueName: \"kubernetes.io/projected/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-kube-api-access-gf42j\") pod \"collect-profiles-29400435-8spm7\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.426852 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-config-volume\") pod \"collect-profiles-29400435-8spm7\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.426881 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-secret-volume\") pod \"collect-profiles-29400435-8spm7\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.428462 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-config-volume\") pod 
\"collect-profiles-29400435-8spm7\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.435905 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-secret-volume\") pod \"collect-profiles-29400435-8spm7\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.452952 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf42j\" (UniqueName: \"kubernetes.io/projected/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-kube-api-access-gf42j\") pod \"collect-profiles-29400435-8spm7\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:00 crc kubenswrapper[4767]: I1124 23:15:00.520603 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:01 crc kubenswrapper[4767]: I1124 23:15:01.017968 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7"] Nov 24 23:15:01 crc kubenswrapper[4767]: I1124 23:15:01.313476 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:15:01 crc kubenswrapper[4767]: E1124 23:15:01.314025 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:15:02 crc kubenswrapper[4767]: I1124 23:15:02.006448 4767 generic.go:334] "Generic (PLEG): container finished" podID="0df50f81-ce7c-4e3c-9f29-4b5338d0408e" containerID="833cf1c157b8767d5bb90723cd16d3377a8d693ff90dd0d4650f357c602252e7" exitCode=0 Nov 24 23:15:02 crc kubenswrapper[4767]: I1124 23:15:02.006488 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" event={"ID":"0df50f81-ce7c-4e3c-9f29-4b5338d0408e","Type":"ContainerDied","Data":"833cf1c157b8767d5bb90723cd16d3377a8d693ff90dd0d4650f357c602252e7"} Nov 24 23:15:02 crc kubenswrapper[4767]: I1124 23:15:02.006512 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" event={"ID":"0df50f81-ce7c-4e3c-9f29-4b5338d0408e","Type":"ContainerStarted","Data":"581713d0c504bbf733880e3ada847ee11be48d41826eaef3cb2babf11d91c4c8"} Nov 24 23:15:03 crc kubenswrapper[4767]: I1124 23:15:03.426349 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:03 crc kubenswrapper[4767]: I1124 23:15:03.492631 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf42j\" (UniqueName: \"kubernetes.io/projected/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-kube-api-access-gf42j\") pod \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " Nov 24 23:15:03 crc kubenswrapper[4767]: I1124 23:15:03.492764 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-secret-volume\") pod \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " Nov 24 23:15:03 crc kubenswrapper[4767]: I1124 23:15:03.492846 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-config-volume\") pod \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\" (UID: \"0df50f81-ce7c-4e3c-9f29-4b5338d0408e\") " Nov 24 23:15:03 crc kubenswrapper[4767]: I1124 23:15:03.493593 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-config-volume" (OuterVolumeSpecName: "config-volume") pod "0df50f81-ce7c-4e3c-9f29-4b5338d0408e" (UID: "0df50f81-ce7c-4e3c-9f29-4b5338d0408e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 23:15:03 crc kubenswrapper[4767]: I1124 23:15:03.498609 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0df50f81-ce7c-4e3c-9f29-4b5338d0408e" (UID: "0df50f81-ce7c-4e3c-9f29-4b5338d0408e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:15:03 crc kubenswrapper[4767]: I1124 23:15:03.498631 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-kube-api-access-gf42j" (OuterVolumeSpecName: "kube-api-access-gf42j") pod "0df50f81-ce7c-4e3c-9f29-4b5338d0408e" (UID: "0df50f81-ce7c-4e3c-9f29-4b5338d0408e"). InnerVolumeSpecName "kube-api-access-gf42j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:15:03 crc kubenswrapper[4767]: I1124 23:15:03.594675 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf42j\" (UniqueName: \"kubernetes.io/projected/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-kube-api-access-gf42j\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:03 crc kubenswrapper[4767]: I1124 23:15:03.594710 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:03 crc kubenswrapper[4767]: I1124 23:15:03.594720 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df50f81-ce7c-4e3c-9f29-4b5338d0408e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:04 crc kubenswrapper[4767]: I1124 23:15:04.030617 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" event={"ID":"0df50f81-ce7c-4e3c-9f29-4b5338d0408e","Type":"ContainerDied","Data":"581713d0c504bbf733880e3ada847ee11be48d41826eaef3cb2babf11d91c4c8"} Nov 24 23:15:04 crc kubenswrapper[4767]: I1124 23:15:04.030676 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="581713d0c504bbf733880e3ada847ee11be48d41826eaef3cb2babf11d91c4c8" Nov 24 23:15:04 crc kubenswrapper[4767]: I1124 23:15:04.030740 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-8spm7" Nov 24 23:15:04 crc kubenswrapper[4767]: I1124 23:15:04.514816 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df"] Nov 24 23:15:04 crc kubenswrapper[4767]: I1124 23:15:04.527246 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-jr9df"] Nov 24 23:15:06 crc kubenswrapper[4767]: I1124 23:15:06.327766 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aef0d5c-c571-45ad-80ca-21ca33e380cb" path="/var/lib/kubelet/pods/6aef0d5c-c571-45ad-80ca-21ca33e380cb/volumes" Nov 24 23:15:13 crc kubenswrapper[4767]: I1124 23:15:13.313604 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:15:13 crc kubenswrapper[4767]: E1124 23:15:13.314663 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:15:25 crc kubenswrapper[4767]: I1124 23:15:25.314002 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:15:25 crc kubenswrapper[4767]: E1124 23:15:25.316314 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:15:38 crc kubenswrapper[4767]: I1124 23:15:38.331845 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:15:38 crc kubenswrapper[4767]: E1124 23:15:38.332784 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:15:45 crc kubenswrapper[4767]: I1124 23:15:45.424239 4767 scope.go:117] "RemoveContainer" containerID="71840228ff7399671c094fc4ea3a0d64c8f471f67a4ec2bb0e5fe27b105e4157" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.000521 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9kcqq"] Nov 24 23:15:46 crc kubenswrapper[4767]: E1124 23:15:46.001409 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df50f81-ce7c-4e3c-9f29-4b5338d0408e" containerName="collect-profiles" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.001429 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df50f81-ce7c-4e3c-9f29-4b5338d0408e" containerName="collect-profiles" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.001705 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df50f81-ce7c-4e3c-9f29-4b5338d0408e" containerName="collect-profiles" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.003604 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.013166 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kcqq"] Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.073248 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmtq4\" (UniqueName: \"kubernetes.io/projected/b0cee23f-4b95-4a7b-97d9-c596316d776d-kube-api-access-pmtq4\") pod \"redhat-marketplace-9kcqq\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.073364 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-catalog-content\") pod \"redhat-marketplace-9kcqq\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.073483 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-utilities\") pod \"redhat-marketplace-9kcqq\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.175398 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmtq4\" (UniqueName: \"kubernetes.io/projected/b0cee23f-4b95-4a7b-97d9-c596316d776d-kube-api-access-pmtq4\") pod \"redhat-marketplace-9kcqq\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.175511 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-catalog-content\") pod \"redhat-marketplace-9kcqq\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.175637 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-utilities\") pod \"redhat-marketplace-9kcqq\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.176253 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-utilities\") pod \"redhat-marketplace-9kcqq\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.176404 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-catalog-content\") pod \"redhat-marketplace-9kcqq\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.190955 4767 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-4cpbd"] Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.194163 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.196383 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmtq4\" (UniqueName: \"kubernetes.io/projected/b0cee23f-4b95-4a7b-97d9-c596316d776d-kube-api-access-pmtq4\") pod \"redhat-marketplace-9kcqq\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.205348 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4cpbd"] Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.277854 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-catalog-content\") pod \"redhat-operators-4cpbd\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.278426 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qcfj\" (UniqueName: \"kubernetes.io/projected/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-kube-api-access-4qcfj\") pod \"redhat-operators-4cpbd\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.278510 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-utilities\") pod \"redhat-operators-4cpbd\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.331726 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.380172 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-catalog-content\") pod \"redhat-operators-4cpbd\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.380335 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qcfj\" (UniqueName: \"kubernetes.io/projected/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-kube-api-access-4qcfj\") pod \"redhat-operators-4cpbd\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.380356 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-utilities\") pod \"redhat-operators-4cpbd\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.380772 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-utilities\") pod \"redhat-operators-4cpbd\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.381221 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-catalog-content\") pod \"redhat-operators-4cpbd\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.401119 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qcfj\" (UniqueName: \"kubernetes.io/projected/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-kube-api-access-4qcfj\") pod \"redhat-operators-4cpbd\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.567858 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:46 crc kubenswrapper[4767]: I1124 23:15:46.823121 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kcqq"] Nov 24 23:15:47 crc kubenswrapper[4767]: I1124 23:15:47.023819 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4cpbd"] Nov 24 23:15:47 crc kubenswrapper[4767]: W1124 23:15:47.032523 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod535dfad5_30a1_4859_aa46_9ab0fc0ca2f0.slice/crio-0d604b82c13373dda95e9cf1f678537ab89d0a171b70c1fc3c70ed16bc15c7b6 WatchSource:0}: Error finding container 0d604b82c13373dda95e9cf1f678537ab89d0a171b70c1fc3c70ed16bc15c7b6: Status 404 returned error can't find the container with id 0d604b82c13373dda95e9cf1f678537ab89d0a171b70c1fc3c70ed16bc15c7b6 Nov 24 23:15:47 crc kubenswrapper[4767]: I1124 23:15:47.522134 4767 generic.go:334] "Generic (PLEG): container finished" podID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerID="f72368c5c49fa4856f758c38dd953e3e5dfef853447f65ef2f3eaae8d2f3b53e" exitCode=0 Nov 24 23:15:47 crc kubenswrapper[4767]: I1124 23:15:47.522183 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cpbd" event={"ID":"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0","Type":"ContainerDied","Data":"f72368c5c49fa4856f758c38dd953e3e5dfef853447f65ef2f3eaae8d2f3b53e"} Nov 24 23:15:47 crc kubenswrapper[4767]: I1124 23:15:47.522553 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cpbd" event={"ID":"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0","Type":"ContainerStarted","Data":"0d604b82c13373dda95e9cf1f678537ab89d0a171b70c1fc3c70ed16bc15c7b6"} Nov 24 23:15:47 crc kubenswrapper[4767]: I1124 23:15:47.524398 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 23:15:47 crc kubenswrapper[4767]: I1124 23:15:47.524638 4767 generic.go:334] "Generic (PLEG): container finished" podID="b0cee23f-4b95-4a7b-97d9-c596316d776d" containerID="27522879a97fd449aec3d4d60aeed2f1aafaaef8ca880d6dd04af1251c616778" exitCode=0 Nov 24 23:15:47 crc kubenswrapper[4767]: I1124 23:15:47.524693 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kcqq" event={"ID":"b0cee23f-4b95-4a7b-97d9-c596316d776d","Type":"ContainerDied","Data":"27522879a97fd449aec3d4d60aeed2f1aafaaef8ca880d6dd04af1251c616778"} Nov 24 23:15:47 crc kubenswrapper[4767]: I1124 23:15:47.524729 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kcqq" event={"ID":"b0cee23f-4b95-4a7b-97d9-c596316d776d","Type":"ContainerStarted","Data":"0aed7d1e8788f713a740b070dc3389b85e5e0da614221e2d34d67095145dbe77"} Nov 24 23:15:48 crc kubenswrapper[4767]: I1124 23:15:48.536833 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kcqq" event={"ID":"b0cee23f-4b95-4a7b-97d9-c596316d776d","Type":"ContainerStarted","Data":"cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271"} Nov 24 23:15:48 crc kubenswrapper[4767]: I1124 23:15:48.544079 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cpbd" 
event={"ID":"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0","Type":"ContainerStarted","Data":"ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889"} Nov 24 23:15:49 crc kubenswrapper[4767]: I1124 23:15:49.560621 4767 generic.go:334] "Generic (PLEG): container finished" podID="b0cee23f-4b95-4a7b-97d9-c596316d776d" containerID="cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271" exitCode=0 Nov 24 23:15:49 crc kubenswrapper[4767]: I1124 23:15:49.560720 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kcqq" event={"ID":"b0cee23f-4b95-4a7b-97d9-c596316d776d","Type":"ContainerDied","Data":"cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271"} Nov 24 23:15:50 crc kubenswrapper[4767]: I1124 23:15:50.313444 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:15:50 crc kubenswrapper[4767]: E1124 23:15:50.314350 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:15:51 crc kubenswrapper[4767]: I1124 23:15:51.586196 4767 generic.go:334] "Generic (PLEG): container finished" podID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerID="ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889" exitCode=0 Nov 24 23:15:51 crc kubenswrapper[4767]: I1124 23:15:51.586306 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cpbd" event={"ID":"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0","Type":"ContainerDied","Data":"ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889"} Nov 24 23:15:51 crc kubenswrapper[4767]: I1124 23:15:51.590179 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kcqq" event={"ID":"b0cee23f-4b95-4a7b-97d9-c596316d776d","Type":"ContainerStarted","Data":"c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613"} Nov 24 23:15:51 crc kubenswrapper[4767]: I1124 23:15:51.638669 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9kcqq" podStartSLOduration=3.265736507 podStartE2EDuration="6.638647375s" podCreationTimestamp="2025-11-24 23:15:45 +0000 UTC" firstStartedPulling="2025-11-24 23:15:47.527105374 +0000 UTC m=+5830.444088746" lastFinishedPulling="2025-11-24 23:15:50.900016242 +0000 UTC m=+5833.816999614" observedRunningTime="2025-11-24 23:15:51.634201349 +0000 UTC m=+5834.551184731" watchObservedRunningTime="2025-11-24 23:15:51.638647375 +0000 UTC m=+5834.555630757" Nov 24 23:15:52 crc kubenswrapper[4767]: I1124 23:15:52.605455 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cpbd" event={"ID":"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0","Type":"ContainerStarted","Data":"2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602"} Nov 24 23:15:52 crc kubenswrapper[4767]: I1124 23:15:52.623627 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4cpbd" podStartSLOduration=2.094515766 podStartE2EDuration="6.623602352s" podCreationTimestamp="2025-11-24 23:15:46 +0000 
UTC" firstStartedPulling="2025-11-24 23:15:47.52412376 +0000 UTC m=+5830.441107132" lastFinishedPulling="2025-11-24 23:15:52.053210336 +0000 UTC m=+5834.970193718" observedRunningTime="2025-11-24 23:15:52.620999518 +0000 UTC m=+5835.537982930" watchObservedRunningTime="2025-11-24 23:15:52.623602352 +0000 UTC m=+5835.540585754" Nov 24 23:15:56 crc kubenswrapper[4767]: I1124 23:15:56.332915 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:56 crc kubenswrapper[4767]: I1124 23:15:56.333344 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:56 crc kubenswrapper[4767]: I1124 23:15:56.387996 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:56 crc kubenswrapper[4767]: I1124 23:15:56.569220 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:56 crc kubenswrapper[4767]: I1124 23:15:56.569686 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:15:56 crc kubenswrapper[4767]: I1124 23:15:56.710320 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:57 crc kubenswrapper[4767]: I1124 23:15:57.385149 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kcqq"] Nov 24 23:15:57 crc kubenswrapper[4767]: I1124 23:15:57.645381 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4cpbd" podUID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerName="registry-server" probeResult="failure" output=< Nov 24 23:15:57 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Nov 24 23:15:57 crc kubenswrapper[4767]: > Nov 24 23:15:58 crc kubenswrapper[4767]: I1124 23:15:58.668583 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9kcqq" podUID="b0cee23f-4b95-4a7b-97d9-c596316d776d" containerName="registry-server" containerID="cri-o://c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613" gracePeriod=2 Nov 24 23:15:58 crc kubenswrapper[4767]: E1124 23:15:58.943335 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0cee23f_4b95_4a7b_97d9_c596316d776d.slice/crio-conmon-c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0cee23f_4b95_4a7b_97d9_c596316d776d.slice/crio-c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613.scope\": RecentStats: unable to find data in memory cache]" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.213315 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.265586 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-utilities\") pod \"b0cee23f-4b95-4a7b-97d9-c596316d776d\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.265732 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmtq4\" (UniqueName: \"kubernetes.io/projected/b0cee23f-4b95-4a7b-97d9-c596316d776d-kube-api-access-pmtq4\") pod \"b0cee23f-4b95-4a7b-97d9-c596316d776d\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.265919 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-catalog-content\") pod \"b0cee23f-4b95-4a7b-97d9-c596316d776d\" (UID: \"b0cee23f-4b95-4a7b-97d9-c596316d776d\") " Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.266317 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-utilities" (OuterVolumeSpecName: "utilities") pod "b0cee23f-4b95-4a7b-97d9-c596316d776d" (UID: "b0cee23f-4b95-4a7b-97d9-c596316d776d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.266821 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.271661 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0cee23f-4b95-4a7b-97d9-c596316d776d-kube-api-access-pmtq4" (OuterVolumeSpecName: "kube-api-access-pmtq4") pod "b0cee23f-4b95-4a7b-97d9-c596316d776d" (UID: "b0cee23f-4b95-4a7b-97d9-c596316d776d"). InnerVolumeSpecName "kube-api-access-pmtq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.286496 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0cee23f-4b95-4a7b-97d9-c596316d776d" (UID: "b0cee23f-4b95-4a7b-97d9-c596316d776d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.369101 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmtq4\" (UniqueName: \"kubernetes.io/projected/b0cee23f-4b95-4a7b-97d9-c596316d776d-kube-api-access-pmtq4\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.369138 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cee23f-4b95-4a7b-97d9-c596316d776d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.690459 4767 generic.go:334] "Generic (PLEG): container finished" podID="b0cee23f-4b95-4a7b-97d9-c596316d776d" containerID="c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613" exitCode=0 Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.690505 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kcqq" event={"ID":"b0cee23f-4b95-4a7b-97d9-c596316d776d","Type":"ContainerDied","Data":"c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613"} Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.690535 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kcqq" event={"ID":"b0cee23f-4b95-4a7b-97d9-c596316d776d","Type":"ContainerDied","Data":"0aed7d1e8788f713a740b070dc3389b85e5e0da614221e2d34d67095145dbe77"} Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.690554 4767 scope.go:117] "RemoveContainer" containerID="c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.691581 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kcqq" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.725666 4767 scope.go:117] "RemoveContainer" containerID="cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.734194 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kcqq"] Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.747926 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kcqq"] Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.763418 4767 scope.go:117] "RemoveContainer" containerID="27522879a97fd449aec3d4d60aeed2f1aafaaef8ca880d6dd04af1251c616778" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.808835 4767 scope.go:117] "RemoveContainer" containerID="c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613" Nov 24 23:15:59 crc kubenswrapper[4767]: E1124 23:15:59.809461 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613\": container with ID starting with c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613 not found: ID does not exist" containerID="c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.809509 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613"} err="failed to get container status \"c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613\": rpc error: code = NotFound desc = could not find container \"c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613\": container with ID starting with c5f6a158e11e9f401253027664564eb333bb0a42463c646f623e7456aeeeb613 not found: ID does not exist" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.809538 4767 scope.go:117] "RemoveContainer" containerID="cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271" Nov 24 23:15:59 crc kubenswrapper[4767]: E1124 23:15:59.810035 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271\": container with ID starting with cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271 not found: ID does not exist" containerID="cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.810074 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271"} err="failed to get container status \"cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271\": rpc error: code = NotFound desc = could not find container \"cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271\": container with ID starting with cb4f7be4b83e57aef13819758dbc4217d456bd22998b418415ed43bcc99d2271 not found: ID does not exist" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.810102 4767 scope.go:117] "RemoveContainer" containerID="27522879a97fd449aec3d4d60aeed2f1aafaaef8ca880d6dd04af1251c616778" Nov 24 23:15:59 crc kubenswrapper[4767]: E1124 23:15:59.810520 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"27522879a97fd449aec3d4d60aeed2f1aafaaef8ca880d6dd04af1251c616778\": container with ID starting with 27522879a97fd449aec3d4d60aeed2f1aafaaef8ca880d6dd04af1251c616778 not found: ID does not exist" containerID="27522879a97fd449aec3d4d60aeed2f1aafaaef8ca880d6dd04af1251c616778" Nov 24 23:15:59 crc kubenswrapper[4767]: I1124 23:15:59.810547 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27522879a97fd449aec3d4d60aeed2f1aafaaef8ca880d6dd04af1251c616778"} err="failed to get container status \"27522879a97fd449aec3d4d60aeed2f1aafaaef8ca880d6dd04af1251c616778\": rpc error: code = NotFound desc = could not find container \"27522879a97fd449aec3d4d60aeed2f1aafaaef8ca880d6dd04af1251c616778\": container with ID starting with 27522879a97fd449aec3d4d60aeed2f1aafaaef8ca880d6dd04af1251c616778 not found: ID does not exist" Nov 24 23:16:00 crc kubenswrapper[4767]: I1124 23:16:00.328163 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0cee23f-4b95-4a7b-97d9-c596316d776d" path="/var/lib/kubelet/pods/b0cee23f-4b95-4a7b-97d9-c596316d776d/volumes" Nov 24 23:16:05 crc kubenswrapper[4767]: I1124 23:16:05.313731 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:16:05 crc kubenswrapper[4767]: E1124 23:16:05.314480 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:16:06 crc kubenswrapper[4767]: I1124 23:16:06.642444 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:16:06 crc kubenswrapper[4767]: I1124 23:16:06.718834 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:16:06 crc kubenswrapper[4767]: I1124 23:16:06.897085 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4cpbd"] Nov 24 23:16:07 crc kubenswrapper[4767]: I1124 23:16:07.793423 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4cpbd" podUID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerName="registry-server" containerID="cri-o://2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602" gracePeriod=2 Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.327036 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.377551 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qcfj\" (UniqueName: \"kubernetes.io/projected/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-kube-api-access-4qcfj\") pod \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.377619 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-catalog-content\") pod \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.377682 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-utilities\") pod \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\" (UID: \"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0\") " Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.379168 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-utilities" (OuterVolumeSpecName: "utilities") pod "535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" (UID: "535dfad5-30a1-4859-aa46-9ab0fc0ca2f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.391610 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-kube-api-access-4qcfj" (OuterVolumeSpecName: "kube-api-access-4qcfj") pod "535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" (UID: "535dfad5-30a1-4859-aa46-9ab0fc0ca2f0"). InnerVolumeSpecName "kube-api-access-4qcfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.480684 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qcfj\" (UniqueName: \"kubernetes.io/projected/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-kube-api-access-4qcfj\") on node \"crc\" DevicePath \"\"" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.480737 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.501236 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" (UID: "535dfad5-30a1-4859-aa46-9ab0fc0ca2f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.582001 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.806036 4767 generic.go:334] "Generic (PLEG): container finished" podID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerID="2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602" exitCode=0 Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.806080 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cpbd" event={"ID":"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0","Type":"ContainerDied","Data":"2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602"} Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.806111 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cpbd" event={"ID":"535dfad5-30a1-4859-aa46-9ab0fc0ca2f0","Type":"ContainerDied","Data":"0d604b82c13373dda95e9cf1f678537ab89d0a171b70c1fc3c70ed16bc15c7b6"} Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.806119 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cpbd" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.806131 4767 scope.go:117] "RemoveContainer" containerID="2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.843733 4767 scope.go:117] "RemoveContainer" containerID="ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.873334 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4cpbd"] Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.877026 4767 scope.go:117] "RemoveContainer" containerID="f72368c5c49fa4856f758c38dd953e3e5dfef853447f65ef2f3eaae8d2f3b53e" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.888939 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4cpbd"] Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.918608 4767 scope.go:117] "RemoveContainer" containerID="2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602" Nov 24 23:16:08 crc kubenswrapper[4767]: E1124 23:16:08.919335 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602\": container with ID starting with 2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602 not found: ID does not exist" containerID="2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.919482 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602"} err="failed to get container status \"2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602\": rpc error: code = NotFound desc = could not find container \"2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602\": container with ID starting with 2b264fec583a9f314aea9f4974101c9ab49a965a828d8862ab2eb3ad47361602 not found: ID does not exist" Nov 24 23:16:08 crc 
kubenswrapper[4767]: I1124 23:16:08.919573 4767 scope.go:117] "RemoveContainer" containerID="ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889" Nov 24 23:16:08 crc kubenswrapper[4767]: E1124 23:16:08.920220 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889\": container with ID starting with ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889 not found: ID does not exist" containerID="ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.920299 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889"} err="failed to get container status \"ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889\": rpc error: code = NotFound desc = could not find container \"ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889\": container with ID starting with ec04d8787d7b6d9850b13d5fc7623dd7fe6488ca2099d0c43c2f4832e1415889 not found: ID does not exist" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.920341 4767 scope.go:117] "RemoveContainer" containerID="f72368c5c49fa4856f758c38dd953e3e5dfef853447f65ef2f3eaae8d2f3b53e" Nov 24 23:16:08 crc kubenswrapper[4767]: E1124 23:16:08.921028 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f72368c5c49fa4856f758c38dd953e3e5dfef853447f65ef2f3eaae8d2f3b53e\": container with ID starting with f72368c5c49fa4856f758c38dd953e3e5dfef853447f65ef2f3eaae8d2f3b53e not found: ID does not exist" containerID="f72368c5c49fa4856f758c38dd953e3e5dfef853447f65ef2f3eaae8d2f3b53e" Nov 24 23:16:08 crc kubenswrapper[4767]: I1124 23:16:08.921071 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f72368c5c49fa4856f758c38dd953e3e5dfef853447f65ef2f3eaae8d2f3b53e"} err="failed to get container status \"f72368c5c49fa4856f758c38dd953e3e5dfef853447f65ef2f3eaae8d2f3b53e\": rpc error: code = NotFound desc = could not find container \"f72368c5c49fa4856f758c38dd953e3e5dfef853447f65ef2f3eaae8d2f3b53e\": container with ID starting with f72368c5c49fa4856f758c38dd953e3e5dfef853447f65ef2f3eaae8d2f3b53e not found: ID does not exist" Nov 24 23:16:10 crc kubenswrapper[4767]: I1124 23:16:10.328855 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" path="/var/lib/kubelet/pods/535dfad5-30a1-4859-aa46-9ab0fc0ca2f0/volumes" Nov 24 23:16:16 crc kubenswrapper[4767]: I1124 23:16:16.314712 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:16:16 crc kubenswrapper[4767]: E1124 23:16:16.315783 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:16:31 crc kubenswrapper[4767]: I1124 23:16:31.313177 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" 
Nov 24 23:16:31 crc kubenswrapper[4767]: E1124 23:16:31.314718 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:16:45 crc kubenswrapper[4767]: I1124 23:16:45.313818 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:16:45 crc kubenswrapper[4767]: E1124 23:16:45.314524 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:16:45 crc kubenswrapper[4767]: I1124 23:16:45.522454 4767 scope.go:117] "RemoveContainer" containerID="206bed68547d9677bd82f6c4358ba56aaca321f0a05a8f7db14364d3ca6f76ef" Nov 24 23:16:45 crc kubenswrapper[4767]: I1124 23:16:45.558404 4767 scope.go:117] "RemoveContainer" containerID="6402831d61bf996d86695687891c4735d6c54ede2ad4c7a650929a5b5d22e26c" Nov 24 23:16:45 crc kubenswrapper[4767]: I1124 23:16:45.603480 4767 scope.go:117] "RemoveContainer" containerID="fab7c5034a399cde485f371dcec8586c1f194880c781f8c511cf8210e5fa48b1" Nov 24 23:17:00 crc kubenswrapper[4767]: I1124 23:17:00.314049 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:17:00 crc kubenswrapper[4767]: E1124 23:17:00.314924 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:17:13 crc kubenswrapper[4767]: I1124 23:17:13.314189 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:17:13 crc kubenswrapper[4767]: E1124 23:17:13.318173 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:17:27 crc kubenswrapper[4767]: I1124 23:17:27.313776 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:17:27 crc kubenswrapper[4767]: E1124 23:17:27.314790 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:17:40 crc kubenswrapper[4767]: I1124 23:17:40.313930 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:17:40 crc kubenswrapper[4767]: E1124 23:17:40.315013 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:17:55 crc kubenswrapper[4767]: I1124 23:17:55.313388 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:17:55 crc kubenswrapper[4767]: E1124 23:17:55.315479 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:18:06 crc kubenswrapper[4767]: I1124 23:18:06.313586 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0" Nov 24 23:18:07 crc kubenswrapper[4767]: I1124 23:18:07.224460 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"ee5d6144976e6d6bf323edd8016e7c3d9fb8460308509f1132970a1086251b68"} Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.280638 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7zdf4"] Nov 24 23:18:10 crc kubenswrapper[4767]: E1124 23:18:10.281800 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cee23f-4b95-4a7b-97d9-c596316d776d" containerName="registry-server" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.281825 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cee23f-4b95-4a7b-97d9-c596316d776d" containerName="registry-server" Nov 24 23:18:10 crc kubenswrapper[4767]: E1124 23:18:10.281851 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerName="extract-utilities" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.281863 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerName="extract-utilities" Nov 24 23:18:10 crc kubenswrapper[4767]: E1124 23:18:10.281918 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerName="extract-content" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.281931 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerName="extract-content" Nov 24 23:18:10 crc kubenswrapper[4767]: E1124 23:18:10.281959 4767 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerName="registry-server" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.281970 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerName="registry-server" Nov 24 23:18:10 crc kubenswrapper[4767]: E1124 23:18:10.282001 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cee23f-4b95-4a7b-97d9-c596316d776d" containerName="extract-content" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.282013 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cee23f-4b95-4a7b-97d9-c596316d776d" containerName="extract-content" Nov 24 23:18:10 crc kubenswrapper[4767]: E1124 23:18:10.282034 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cee23f-4b95-4a7b-97d9-c596316d776d" containerName="extract-utilities" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.282042 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cee23f-4b95-4a7b-97d9-c596316d776d" containerName="extract-utilities" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.282373 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="535dfad5-30a1-4859-aa46-9ab0fc0ca2f0" containerName="registry-server" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.282403 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0cee23f-4b95-4a7b-97d9-c596316d776d" containerName="registry-server" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.284423 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.295185 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zdf4"] Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.464924 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-catalog-content\") pod \"certified-operators-7zdf4\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.465235 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmrwg\" (UniqueName: \"kubernetes.io/projected/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-kube-api-access-jmrwg\") pod \"certified-operators-7zdf4\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.465516 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-utilities\") pod \"certified-operators-7zdf4\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.566855 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-utilities\") pod \"certified-operators-7zdf4\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 
23:18:10.566978 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-catalog-content\") pod \"certified-operators-7zdf4\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.567030 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmrwg\" (UniqueName: \"kubernetes.io/projected/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-kube-api-access-jmrwg\") pod \"certified-operators-7zdf4\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.567647 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-utilities\") pod \"certified-operators-7zdf4\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.567739 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-catalog-content\") pod \"certified-operators-7zdf4\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.602474 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmrwg\" (UniqueName: \"kubernetes.io/projected/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-kube-api-access-jmrwg\") pod \"certified-operators-7zdf4\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:10 crc kubenswrapper[4767]: I1124 23:18:10.615240 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:11 crc kubenswrapper[4767]: W1124 23:18:11.116363 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9f1c0a2_95bb_4420_a257_44bf27a99c8f.slice/crio-65fe2a2688bcefd6ae5093fdf86a76b81edb9cb5869bf2ac2069eca9b1731c73 WatchSource:0}: Error finding container 65fe2a2688bcefd6ae5093fdf86a76b81edb9cb5869bf2ac2069eca9b1731c73: Status 404 returned error can't find the container with id 65fe2a2688bcefd6ae5093fdf86a76b81edb9cb5869bf2ac2069eca9b1731c73 Nov 24 23:18:11 crc kubenswrapper[4767]: I1124 23:18:11.119467 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zdf4"] Nov 24 23:18:11 crc kubenswrapper[4767]: I1124 23:18:11.274332 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zdf4" event={"ID":"f9f1c0a2-95bb-4420-a257-44bf27a99c8f","Type":"ContainerStarted","Data":"65fe2a2688bcefd6ae5093fdf86a76b81edb9cb5869bf2ac2069eca9b1731c73"} Nov 24 23:18:12 crc kubenswrapper[4767]: I1124 23:18:12.291613 4767 generic.go:334] "Generic (PLEG): container finished" podID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" containerID="51d69cdfc30e36695bb14040050b35f0f48407bc951a98a1e41a6399bedb9f52" exitCode=0 Nov 24 23:18:12 crc kubenswrapper[4767]: I1124 23:18:12.291784 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zdf4" event={"ID":"f9f1c0a2-95bb-4420-a257-44bf27a99c8f","Type":"ContainerDied","Data":"51d69cdfc30e36695bb14040050b35f0f48407bc951a98a1e41a6399bedb9f52"} Nov 24 23:18:13 crc kubenswrapper[4767]: I1124 23:18:13.301452 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zdf4" event={"ID":"f9f1c0a2-95bb-4420-a257-44bf27a99c8f","Type":"ContainerStarted","Data":"eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d"} Nov 24 23:18:14 crc kubenswrapper[4767]: I1124 23:18:14.315574 4767 generic.go:334] "Generic (PLEG): container finished" podID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" containerID="eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d" exitCode=0 Nov 24 23:18:14 crc kubenswrapper[4767]: I1124 23:18:14.330302 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zdf4" event={"ID":"f9f1c0a2-95bb-4420-a257-44bf27a99c8f","Type":"ContainerDied","Data":"eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d"} Nov 24 23:18:15 crc kubenswrapper[4767]: I1124 23:18:15.329701 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zdf4" event={"ID":"f9f1c0a2-95bb-4420-a257-44bf27a99c8f","Type":"ContainerStarted","Data":"53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7"} Nov 24 23:18:15 crc kubenswrapper[4767]: I1124 23:18:15.355430 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7zdf4" podStartSLOduration=2.703614395 podStartE2EDuration="5.355403605s" podCreationTimestamp="2025-11-24 23:18:10 +0000 UTC" firstStartedPulling="2025-11-24 23:18:12.295242909 +0000 UTC m=+5975.212226321" lastFinishedPulling="2025-11-24 23:18:14.947032149 +0000 UTC m=+5977.864015531" observedRunningTime="2025-11-24 23:18:15.352580685 +0000 UTC m=+5978.269564137" watchObservedRunningTime="2025-11-24 23:18:15.355403605 +0000 UTC m=+5978.272387017" 
Nov 24 23:18:20 crc kubenswrapper[4767]: I1124 23:18:20.616588 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:20 crc kubenswrapper[4767]: I1124 23:18:20.617507 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:20 crc kubenswrapper[4767]: I1124 23:18:20.697534 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:21 crc kubenswrapper[4767]: I1124 23:18:21.496429 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:21 crc kubenswrapper[4767]: I1124 23:18:21.568756 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zdf4"] Nov 24 23:18:23 crc kubenswrapper[4767]: I1124 23:18:23.430130 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7zdf4" podUID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" containerName="registry-server" containerID="cri-o://53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7" gracePeriod=2 Nov 24 23:18:23 crc kubenswrapper[4767]: I1124 23:18:23.925845 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.052263 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmrwg\" (UniqueName: \"kubernetes.io/projected/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-kube-api-access-jmrwg\") pod \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.052432 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-utilities\") pod \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.052487 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-catalog-content\") pod \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\" (UID: \"f9f1c0a2-95bb-4420-a257-44bf27a99c8f\") " Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.054056 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-utilities" (OuterVolumeSpecName: "utilities") pod "f9f1c0a2-95bb-4420-a257-44bf27a99c8f" (UID: "f9f1c0a2-95bb-4420-a257-44bf27a99c8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.061545 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-kube-api-access-jmrwg" (OuterVolumeSpecName: "kube-api-access-jmrwg") pod "f9f1c0a2-95bb-4420-a257-44bf27a99c8f" (UID: "f9f1c0a2-95bb-4420-a257-44bf27a99c8f"). InnerVolumeSpecName "kube-api-access-jmrwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.114571 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9f1c0a2-95bb-4420-a257-44bf27a99c8f" (UID: "f9f1c0a2-95bb-4420-a257-44bf27a99c8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.154786 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmrwg\" (UniqueName: \"kubernetes.io/projected/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-kube-api-access-jmrwg\") on node \"crc\" DevicePath \"\"" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.154815 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.154827 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f1c0a2-95bb-4420-a257-44bf27a99c8f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.445947 4767 generic.go:334] "Generic (PLEG): container finished" podID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" containerID="53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7" exitCode=0 Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.446063 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zdf4" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.446124 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zdf4" event={"ID":"f9f1c0a2-95bb-4420-a257-44bf27a99c8f","Type":"ContainerDied","Data":"53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7"} Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.446579 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zdf4" event={"ID":"f9f1c0a2-95bb-4420-a257-44bf27a99c8f","Type":"ContainerDied","Data":"65fe2a2688bcefd6ae5093fdf86a76b81edb9cb5869bf2ac2069eca9b1731c73"} Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.446618 4767 scope.go:117] "RemoveContainer" containerID="53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.485049 4767 scope.go:117] "RemoveContainer" containerID="eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.497119 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zdf4"] Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.511161 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7zdf4"] Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.519535 4767 scope.go:117] "RemoveContainer" containerID="51d69cdfc30e36695bb14040050b35f0f48407bc951a98a1e41a6399bedb9f52" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.564900 4767 scope.go:117] "RemoveContainer" containerID="53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7" Nov 24 23:18:24 crc kubenswrapper[4767]: E1124 23:18:24.565800 4767 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7\": container with ID starting with 53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7 not found: ID does not exist" containerID="53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.565870 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7"} err="failed to get container status \"53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7\": rpc error: code = NotFound desc = could not find container \"53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7\": container with ID starting with 53b03799a3b4afb7eb0f6d09465b435bfa6defbf136a2493d7922d5065a6d4e7 not found: ID does not exist" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.565910 4767 scope.go:117] "RemoveContainer" containerID="eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d" Nov 24 23:18:24 crc kubenswrapper[4767]: E1124 23:18:24.566480 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d\": container with ID starting with eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d not found: ID does not exist" containerID="eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.566504 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d"} err="failed to get container status \"eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d\": rpc error: code = NotFound desc = could not find container \"eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d\": container with ID starting with eae82e20022eee1d0103b8e0e6240cdcb68e5d598ddfc4922fcba7a086564e9d not found: ID does not exist" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.566520 4767 scope.go:117] "RemoveContainer" containerID="51d69cdfc30e36695bb14040050b35f0f48407bc951a98a1e41a6399bedb9f52" Nov 24 23:18:24 crc kubenswrapper[4767]: E1124 23:18:24.566881 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51d69cdfc30e36695bb14040050b35f0f48407bc951a98a1e41a6399bedb9f52\": container with ID starting with 51d69cdfc30e36695bb14040050b35f0f48407bc951a98a1e41a6399bedb9f52 not found: ID does not exist" containerID="51d69cdfc30e36695bb14040050b35f0f48407bc951a98a1e41a6399bedb9f52" Nov 24 23:18:24 crc kubenswrapper[4767]: I1124 23:18:24.566940 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51d69cdfc30e36695bb14040050b35f0f48407bc951a98a1e41a6399bedb9f52"} err="failed to get container status \"51d69cdfc30e36695bb14040050b35f0f48407bc951a98a1e41a6399bedb9f52\": rpc error: code = NotFound desc = could not find container \"51d69cdfc30e36695bb14040050b35f0f48407bc951a98a1e41a6399bedb9f52\": container with ID starting with 51d69cdfc30e36695bb14040050b35f0f48407bc951a98a1e41a6399bedb9f52 not found: ID does not exist" Nov 24 23:18:26 crc kubenswrapper[4767]: I1124 23:18:26.331789 4767 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" path="/var/lib/kubelet/pods/f9f1c0a2-95bb-4420-a257-44bf27a99c8f/volumes"
Nov 24 23:20:35 crc kubenswrapper[4767]: I1124 23:20:35.481503 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:20:35 crc kubenswrapper[4767]: I1124 23:20:35.482250 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:21:05 crc kubenswrapper[4767]: I1124 23:21:05.481954 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:21:05 crc kubenswrapper[4767]: I1124 23:21:05.482846 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:21:35 crc kubenswrapper[4767]: I1124 23:21:35.481555 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:21:35 crc kubenswrapper[4767]: I1124 23:21:35.482530 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:21:35 crc kubenswrapper[4767]: I1124 23:21:35.482612 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd"
Nov 24 23:21:35 crc kubenswrapper[4767]: I1124 23:21:35.483628 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee5d6144976e6d6bf323edd8016e7c3d9fb8460308509f1132970a1086251b68"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 23:21:35 crc kubenswrapper[4767]: I1124 23:21:35.483701 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://ee5d6144976e6d6bf323edd8016e7c3d9fb8460308509f1132970a1086251b68" gracePeriod=600
Nov 24 23:21:35 crc kubenswrapper[4767]: I1124 23:21:35.768622 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="ee5d6144976e6d6bf323edd8016e7c3d9fb8460308509f1132970a1086251b68" exitCode=0
Nov 24 23:21:35 crc kubenswrapper[4767]: I1124 23:21:35.768693 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"ee5d6144976e6d6bf323edd8016e7c3d9fb8460308509f1132970a1086251b68"}
Nov 24 23:21:35 crc kubenswrapper[4767]: I1124 23:21:35.768997 4767 scope.go:117] "RemoveContainer" containerID="b13956fe29f599ae6d6d1658d2694453c197cf454453f5a139df2090056fe6f0"
Nov 24 23:21:36 crc kubenswrapper[4767]: I1124 23:21:36.780939 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"}
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.040554 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-77vgn"]
Nov 24 23:22:07 crc kubenswrapper[4767]: E1124 23:22:07.041702 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" containerName="extract-content"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.041718 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" containerName="extract-content"
Nov 24 23:22:07 crc kubenswrapper[4767]: E1124 23:22:07.041733 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" containerName="extract-utilities"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.041742 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" containerName="extract-utilities"
Nov 24 23:22:07 crc kubenswrapper[4767]: E1124 23:22:07.041787 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" containerName="registry-server"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.041795 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" containerName="registry-server"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.042033 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9f1c0a2-95bb-4420-a257-44bf27a99c8f" containerName="registry-server"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.046304 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.075872 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77vgn"]
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.155605 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7ljf\" (UniqueName: \"kubernetes.io/projected/da369bde-9e80-47c3-a83b-a3271b413828-kube-api-access-z7ljf\") pod \"community-operators-77vgn\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") " pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.155703 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-utilities\") pod \"community-operators-77vgn\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") " pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.155740 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-catalog-content\") pod \"community-operators-77vgn\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") " pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.257645 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-utilities\") pod \"community-operators-77vgn\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") " pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.257726 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-catalog-content\") pod \"community-operators-77vgn\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") " pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.257956 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7ljf\" (UniqueName: \"kubernetes.io/projected/da369bde-9e80-47c3-a83b-a3271b413828-kube-api-access-z7ljf\") pod \"community-operators-77vgn\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") " pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.258173 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-utilities\") pod \"community-operators-77vgn\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") " pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.258764 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-catalog-content\") pod \"community-operators-77vgn\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") " pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.282127 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7ljf\" (UniqueName: \"kubernetes.io/projected/da369bde-9e80-47c3-a83b-a3271b413828-kube-api-access-z7ljf\") pod \"community-operators-77vgn\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") " pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.371217 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:07 crc kubenswrapper[4767]: I1124 23:22:07.837412 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77vgn"]
Nov 24 23:22:07 crc kubenswrapper[4767]: W1124 23:22:07.844462 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda369bde_9e80_47c3_a83b_a3271b413828.slice/crio-f0fad92527c70d292836bf49660d4d0f0bc8bf3c344bf66726f3b9f028556190 WatchSource:0}: Error finding container f0fad92527c70d292836bf49660d4d0f0bc8bf3c344bf66726f3b9f028556190: Status 404 returned error can't find the container with id f0fad92527c70d292836bf49660d4d0f0bc8bf3c344bf66726f3b9f028556190
Nov 24 23:22:08 crc kubenswrapper[4767]: I1124 23:22:08.138045 4767 generic.go:334] "Generic (PLEG): container finished" podID="da369bde-9e80-47c3-a83b-a3271b413828" containerID="10c8a62885165ae0bdb3de09334e0323ff063b6bc31d163ee4bfea0da62489d5" exitCode=0
Nov 24 23:22:08 crc kubenswrapper[4767]: I1124 23:22:08.138090 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77vgn" event={"ID":"da369bde-9e80-47c3-a83b-a3271b413828","Type":"ContainerDied","Data":"10c8a62885165ae0bdb3de09334e0323ff063b6bc31d163ee4bfea0da62489d5"}
Nov 24 23:22:08 crc kubenswrapper[4767]: I1124 23:22:08.138128 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77vgn" event={"ID":"da369bde-9e80-47c3-a83b-a3271b413828","Type":"ContainerStarted","Data":"f0fad92527c70d292836bf49660d4d0f0bc8bf3c344bf66726f3b9f028556190"}
Nov 24 23:22:08 crc kubenswrapper[4767]: I1124 23:22:08.140060 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 23:22:09 crc kubenswrapper[4767]: I1124 23:22:09.150365 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77vgn" event={"ID":"da369bde-9e80-47c3-a83b-a3271b413828","Type":"ContainerStarted","Data":"271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3"}
Nov 24 23:22:10 crc kubenswrapper[4767]: I1124 23:22:10.166652 4767 generic.go:334] "Generic (PLEG): container finished" podID="da369bde-9e80-47c3-a83b-a3271b413828" containerID="271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3" exitCode=0
Nov 24 23:22:10 crc kubenswrapper[4767]: I1124 23:22:10.166741 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77vgn" event={"ID":"da369bde-9e80-47c3-a83b-a3271b413828","Type":"ContainerDied","Data":"271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3"}
Nov 24 23:22:11 crc kubenswrapper[4767]: I1124 23:22:11.184822 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77vgn" event={"ID":"da369bde-9e80-47c3-a83b-a3271b413828","Type":"ContainerStarted","Data":"d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b"}
Nov 24 23:22:11 crc kubenswrapper[4767]: I1124 23:22:11.215586 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-77vgn" podStartSLOduration=1.678232767 podStartE2EDuration="4.215561861s" podCreationTimestamp="2025-11-24 23:22:07 +0000 UTC" firstStartedPulling="2025-11-24 23:22:08.139820925 +0000 UTC m=+6211.056804297" lastFinishedPulling="2025-11-24 23:22:10.677149969 +0000 UTC m=+6213.594133391" observedRunningTime="2025-11-24 23:22:11.209362166 +0000 UTC m=+6214.126345538" watchObservedRunningTime="2025-11-24 23:22:11.215561861 +0000 UTC m=+6214.132545263"
Nov 24 23:22:17 crc kubenswrapper[4767]: I1124 23:22:17.372457 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:17 crc kubenswrapper[4767]: I1124 23:22:17.373249 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:17 crc kubenswrapper[4767]: I1124 23:22:17.444950 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:18 crc kubenswrapper[4767]: I1124 23:22:18.333137 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:18 crc kubenswrapper[4767]: I1124 23:22:18.393703 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77vgn"]
Nov 24 23:22:20 crc kubenswrapper[4767]: I1124 23:22:20.297412 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-77vgn" podUID="da369bde-9e80-47c3-a83b-a3271b413828" containerName="registry-server" containerID="cri-o://d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b" gracePeriod=2
Nov 24 23:22:20 crc kubenswrapper[4767]: I1124 23:22:20.863006 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.614440 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-catalog-content\") pod \"da369bde-9e80-47c3-a83b-a3271b413828\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") "
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.614552 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7ljf\" (UniqueName: \"kubernetes.io/projected/da369bde-9e80-47c3-a83b-a3271b413828-kube-api-access-z7ljf\") pod \"da369bde-9e80-47c3-a83b-a3271b413828\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") "
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.614782 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-utilities\") pod \"da369bde-9e80-47c3-a83b-a3271b413828\" (UID: \"da369bde-9e80-47c3-a83b-a3271b413828\") "
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.616622 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-utilities" (OuterVolumeSpecName: "utilities") pod "da369bde-9e80-47c3-a83b-a3271b413828" (UID: "da369bde-9e80-47c3-a83b-a3271b413828"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.631802 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da369bde-9e80-47c3-a83b-a3271b413828-kube-api-access-z7ljf" (OuterVolumeSpecName: "kube-api-access-z7ljf") pod "da369bde-9e80-47c3-a83b-a3271b413828" (UID: "da369bde-9e80-47c3-a83b-a3271b413828"). InnerVolumeSpecName "kube-api-access-z7ljf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.656029 4767 generic.go:334] "Generic (PLEG): container finished" podID="da369bde-9e80-47c3-a83b-a3271b413828" containerID="d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b" exitCode=0
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.656073 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77vgn" event={"ID":"da369bde-9e80-47c3-a83b-a3271b413828","Type":"ContainerDied","Data":"d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b"}
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.656100 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77vgn" event={"ID":"da369bde-9e80-47c3-a83b-a3271b413828","Type":"ContainerDied","Data":"f0fad92527c70d292836bf49660d4d0f0bc8bf3c344bf66726f3b9f028556190"}
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.656117 4767 scope.go:117] "RemoveContainer" containerID="d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b"
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.656258 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77vgn"
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.687091 4767 scope.go:117] "RemoveContainer" containerID="271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3"
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.700867 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da369bde-9e80-47c3-a83b-a3271b413828" (UID: "da369bde-9e80-47c3-a83b-a3271b413828"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.713196 4767 scope.go:117] "RemoveContainer" containerID="10c8a62885165ae0bdb3de09334e0323ff063b6bc31d163ee4bfea0da62489d5"
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.718042 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.718068 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da369bde-9e80-47c3-a83b-a3271b413828-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.718079 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7ljf\" (UniqueName: \"kubernetes.io/projected/da369bde-9e80-47c3-a83b-a3271b413828-kube-api-access-z7ljf\") on node \"crc\" DevicePath \"\""
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.749415 4767 scope.go:117] "RemoveContainer" containerID="d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b"
Nov 24 23:22:21 crc kubenswrapper[4767]: E1124 23:22:21.749917 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b\": container with ID starting with d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b not found: ID does not exist" containerID="d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b"
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.749961 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b"} err="failed to get container status \"d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b\": rpc error: code = NotFound desc = could not find container \"d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b\": container with ID starting with d79b98532f847edc8ea08d84d655c9d2245a1557020f1361dbfb6087d6f2e33b not found: ID does not exist"
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.749995 4767 scope.go:117] "RemoveContainer" containerID="271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3"
Nov 24 23:22:21 crc kubenswrapper[4767]: E1124 23:22:21.750493 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3\": container with ID starting with 271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3 not found: ID does not exist" containerID="271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3"
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.750549 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3"} err="failed to get container status \"271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3\": rpc error: code = NotFound desc = could not find container \"271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3\": container with ID starting with 271dac7171bed0ae0c4949487ff0a527dc775c4e4ad81a36ca5e54d927fedae3 not found: ID does not exist"
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.750583 4767 scope.go:117] "RemoveContainer" containerID="10c8a62885165ae0bdb3de09334e0323ff063b6bc31d163ee4bfea0da62489d5"
Nov 24 23:22:21 crc kubenswrapper[4767]: E1124 23:22:21.750898 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c8a62885165ae0bdb3de09334e0323ff063b6bc31d163ee4bfea0da62489d5\": container with ID starting with 10c8a62885165ae0bdb3de09334e0323ff063b6bc31d163ee4bfea0da62489d5 not found: ID does not exist" containerID="10c8a62885165ae0bdb3de09334e0323ff063b6bc31d163ee4bfea0da62489d5"
Nov 24 23:22:21 crc kubenswrapper[4767]: I1124 23:22:21.750927 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c8a62885165ae0bdb3de09334e0323ff063b6bc31d163ee4bfea0da62489d5"} err="failed to get container status \"10c8a62885165ae0bdb3de09334e0323ff063b6bc31d163ee4bfea0da62489d5\": rpc error: code = NotFound desc = could not find container \"10c8a62885165ae0bdb3de09334e0323ff063b6bc31d163ee4bfea0da62489d5\": container with ID starting with 10c8a62885165ae0bdb3de09334e0323ff063b6bc31d163ee4bfea0da62489d5 not found: ID does not exist"
Nov 24 23:22:22 crc kubenswrapper[4767]: I1124 23:22:22.002075 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77vgn"]
Nov 24 23:22:22 crc kubenswrapper[4767]: I1124 23:22:22.016127 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-77vgn"]
Nov 24 23:22:22 crc kubenswrapper[4767]: I1124 23:22:22.332664 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da369bde-9e80-47c3-a83b-a3271b413828" path="/var/lib/kubelet/pods/da369bde-9e80-47c3-a83b-a3271b413828/volumes"
Nov 24 23:23:35 crc kubenswrapper[4767]: I1124 23:23:35.481620 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:23:35 crc kubenswrapper[4767]: I1124 23:23:35.483385 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:24:05 crc kubenswrapper[4767]: I1124 23:24:05.482121 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:24:05 crc kubenswrapper[4767]: I1124 23:24:05.482904 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:24:35 crc kubenswrapper[4767]: I1124 23:24:35.481539 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:24:35 crc kubenswrapper[4767]: I1124 23:24:35.482198 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:24:35 crc kubenswrapper[4767]: I1124 23:24:35.482258 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd"
Nov 24 23:24:35 crc kubenswrapper[4767]: I1124 23:24:35.483459 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 23:24:35 crc kubenswrapper[4767]: I1124 23:24:35.483707 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200" gracePeriod=600
Nov 24 23:24:35 crc kubenswrapper[4767]: E1124 23:24:35.615547 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:24:36 crc kubenswrapper[4767]: I1124 23:24:36.304593 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200" exitCode=0
Nov 24 23:24:36 crc kubenswrapper[4767]: I1124 23:24:36.304666 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"}
Nov 24 23:24:36 crc kubenswrapper[4767]: I1124 23:24:36.304751 4767 scope.go:117] "RemoveContainer" containerID="ee5d6144976e6d6bf323edd8016e7c3d9fb8460308509f1132970a1086251b68"
Nov 24 23:24:36 crc kubenswrapper[4767]: I1124 23:24:36.305839 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:24:36 crc kubenswrapper[4767]: E1124 23:24:36.306430 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:24:49 crc kubenswrapper[4767]: I1124 23:24:49.313250 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:24:49 crc kubenswrapper[4767]: E1124 23:24:49.314119 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:25:04 crc kubenswrapper[4767]: I1124 23:25:04.314968 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:25:04 crc kubenswrapper[4767]: E1124 23:25:04.316483 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:25:15 crc kubenswrapper[4767]: I1124 23:25:15.313436 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:25:15 crc kubenswrapper[4767]: E1124 23:25:15.315840 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:25:27 crc kubenswrapper[4767]: I1124 23:25:27.313693 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:25:27 crc kubenswrapper[4767]: E1124 23:25:27.314975 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:25:40 crc kubenswrapper[4767]: I1124 23:25:40.313599 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:25:40 crc kubenswrapper[4767]: E1124 23:25:40.315217 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:25:52 crc kubenswrapper[4767]: I1124 23:25:52.314107 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:25:52 crc kubenswrapper[4767]: E1124 23:25:52.315189 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:26:03 crc kubenswrapper[4767]: I1124 23:26:03.314596 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:26:03 crc kubenswrapper[4767]: E1124 23:26:03.315694 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.522903 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wsq8k"]
Nov 24 23:26:12 crc kubenswrapper[4767]: E1124 23:26:12.524435 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da369bde-9e80-47c3-a83b-a3271b413828" containerName="extract-utilities"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.524458 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="da369bde-9e80-47c3-a83b-a3271b413828" containerName="extract-utilities"
Nov 24 23:26:12 crc kubenswrapper[4767]: E1124 23:26:12.524489 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da369bde-9e80-47c3-a83b-a3271b413828" containerName="registry-server"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.524501 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="da369bde-9e80-47c3-a83b-a3271b413828" containerName="registry-server"
Nov 24 23:26:12 crc kubenswrapper[4767]: E1124 23:26:12.524546 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da369bde-9e80-47c3-a83b-a3271b413828" containerName="extract-content"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.524559 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="da369bde-9e80-47c3-a83b-a3271b413828" containerName="extract-content"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.524903 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="da369bde-9e80-47c3-a83b-a3271b413828" containerName="registry-server"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.527432 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.549605 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsq8k"]
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.577846 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-catalog-content\") pod \"redhat-operators-wsq8k\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") " pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.578076 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htz96\" (UniqueName: \"kubernetes.io/projected/951043a2-c60a-49dd-80c4-4026f3f9b1e9-kube-api-access-htz96\") pod \"redhat-operators-wsq8k\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") " pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.578219 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-utilities\") pod \"redhat-operators-wsq8k\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") " pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.679791 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-catalog-content\") pod \"redhat-operators-wsq8k\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") " pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.679885 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htz96\" (UniqueName: \"kubernetes.io/projected/951043a2-c60a-49dd-80c4-4026f3f9b1e9-kube-api-access-htz96\") pod \"redhat-operators-wsq8k\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") " pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.679924 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-utilities\") pod \"redhat-operators-wsq8k\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") " pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.680477 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-utilities\") pod \"redhat-operators-wsq8k\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") " pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.680770 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-catalog-content\") pod \"redhat-operators-wsq8k\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") " pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.703729 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htz96\" (UniqueName: \"kubernetes.io/projected/951043a2-c60a-49dd-80c4-4026f3f9b1e9-kube-api-access-htz96\") pod \"redhat-operators-wsq8k\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") " pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:12 crc kubenswrapper[4767]: I1124 23:26:12.857909 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:13 crc kubenswrapper[4767]: I1124 23:26:13.330610 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsq8k"]
Nov 24 23:26:13 crc kubenswrapper[4767]: I1124 23:26:13.510205 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsq8k" event={"ID":"951043a2-c60a-49dd-80c4-4026f3f9b1e9","Type":"ContainerStarted","Data":"976dc50f57c5d4f0f14a7fb27da00172a6191576aaa8249c47862e4822d45cea"}
Nov 24 23:26:14 crc kubenswrapper[4767]: I1124 23:26:14.525202 4767 generic.go:334] "Generic (PLEG): container finished" podID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerID="08486389c6dadefc9882171366746ff258b73b0d7c18cc56a4ed31c3ff33b4d9" exitCode=0
Nov 24 23:26:14 crc kubenswrapper[4767]: I1124 23:26:14.525356 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsq8k" event={"ID":"951043a2-c60a-49dd-80c4-4026f3f9b1e9","Type":"ContainerDied","Data":"08486389c6dadefc9882171366746ff258b73b0d7c18cc56a4ed31c3ff33b4d9"}
Nov 24 23:26:15 crc kubenswrapper[4767]: I1124 23:26:15.540619 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsq8k" event={"ID":"951043a2-c60a-49dd-80c4-4026f3f9b1e9","Type":"ContainerStarted","Data":"7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9"}
Nov 24 23:26:16 crc kubenswrapper[4767]: I1124 23:26:16.559079 4767 generic.go:334] "Generic (PLEG): container finished" podID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerID="7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9" exitCode=0
Nov 24 23:26:16 crc kubenswrapper[4767]: I1124 23:26:16.559170 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsq8k" event={"ID":"951043a2-c60a-49dd-80c4-4026f3f9b1e9","Type":"ContainerDied","Data":"7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9"}
Nov 24 23:26:17 crc kubenswrapper[4767]: I1124 23:26:17.572088 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsq8k" event={"ID":"951043a2-c60a-49dd-80c4-4026f3f9b1e9","Type":"ContainerStarted","Data":"c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96"}
Nov 24 23:26:17 crc kubenswrapper[4767]: I1124 23:26:17.594941 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wsq8k" podStartSLOduration=3.049917199 podStartE2EDuration="5.59492037s" podCreationTimestamp="2025-11-24 23:26:12 +0000 UTC" firstStartedPulling="2025-11-24 23:26:14.52789814 +0000 UTC m=+6457.444881522" lastFinishedPulling="2025-11-24 23:26:17.072901301 +0000 UTC m=+6459.989884693" observedRunningTime="2025-11-24 23:26:17.591187564 +0000 UTC m=+6460.508171006" watchObservedRunningTime="2025-11-24 23:26:17.59492037 +0000 UTC m=+6460.511903752"
Nov 24 23:26:18 crc kubenswrapper[4767]: I1124 23:26:18.319257 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:26:18 crc kubenswrapper[4767]: E1124 23:26:18.319537 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:26:22 crc kubenswrapper[4767]: I1124 23:26:22.858420 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:22 crc kubenswrapper[4767]: I1124 23:26:22.859039 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:23 crc kubenswrapper[4767]: I1124 23:26:23.942986 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsq8k" podUID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerName="registry-server" probeResult="failure" output=<
Nov 24 23:26:23 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s
Nov 24 23:26:23 crc kubenswrapper[4767]: >
Nov 24 23:26:32 crc kubenswrapper[4767]: I1124 23:26:32.934750 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:32 crc kubenswrapper[4767]: I1124 23:26:32.979040 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:33 crc kubenswrapper[4767]: I1124 23:26:33.178173 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsq8k"]
Nov 24 23:26:33 crc kubenswrapper[4767]: I1124 23:26:33.313549 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:26:33 crc kubenswrapper[4767]: E1124 23:26:33.314134 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:26:34 crc kubenswrapper[4767]: I1124 23:26:34.792900 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wsq8k" podUID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerName="registry-server" containerID="cri-o://c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96" gracePeriod=2
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.377500 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.503254 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-catalog-content\") pod \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") "
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.503528 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-utilities\") pod \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") "
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.503574 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htz96\" (UniqueName: \"kubernetes.io/projected/951043a2-c60a-49dd-80c4-4026f3f9b1e9-kube-api-access-htz96\") pod \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\" (UID: \"951043a2-c60a-49dd-80c4-4026f3f9b1e9\") "
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.504573 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-utilities" (OuterVolumeSpecName: "utilities") pod "951043a2-c60a-49dd-80c4-4026f3f9b1e9" (UID: "951043a2-c60a-49dd-80c4-4026f3f9b1e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.512332 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951043a2-c60a-49dd-80c4-4026f3f9b1e9-kube-api-access-htz96" (OuterVolumeSpecName: "kube-api-access-htz96") pod "951043a2-c60a-49dd-80c4-4026f3f9b1e9" (UID: "951043a2-c60a-49dd-80c4-4026f3f9b1e9"). InnerVolumeSpecName "kube-api-access-htz96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.600328 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "951043a2-c60a-49dd-80c4-4026f3f9b1e9" (UID: "951043a2-c60a-49dd-80c4-4026f3f9b1e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.607364 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.607415 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htz96\" (UniqueName: \"kubernetes.io/projected/951043a2-c60a-49dd-80c4-4026f3f9b1e9-kube-api-access-htz96\") on node \"crc\" DevicePath \"\""
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.607435 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951043a2-c60a-49dd-80c4-4026f3f9b1e9-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.807030 4767 generic.go:334] "Generic (PLEG): container finished" podID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerID="c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96" exitCode=0
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.807104 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsq8k" event={"ID":"951043a2-c60a-49dd-80c4-4026f3f9b1e9","Type":"ContainerDied","Data":"c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96"}
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.807136 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsq8k"
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.807168 4767 scope.go:117] "RemoveContainer" containerID="c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96"
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.807149 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsq8k" event={"ID":"951043a2-c60a-49dd-80c4-4026f3f9b1e9","Type":"ContainerDied","Data":"976dc50f57c5d4f0f14a7fb27da00172a6191576aaa8249c47862e4822d45cea"}
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.847918 4767 scope.go:117] "RemoveContainer" containerID="7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9"
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.855460 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsq8k"]
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.874021 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wsq8k"]
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.893885 4767 scope.go:117] "RemoveContainer" containerID="08486389c6dadefc9882171366746ff258b73b0d7c18cc56a4ed31c3ff33b4d9"
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.934891 4767 scope.go:117] "RemoveContainer" containerID="c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96"
Nov 24 23:26:35 crc kubenswrapper[4767]: E1124 23:26:35.936231 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96\": container with ID starting with c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96 not found: ID does not exist" containerID="c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96"
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.936337 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96"} err="failed to get container status \"c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96\": rpc error: code = NotFound desc = could not find container \"c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96\": container with ID starting with c23b2ada10761e5929ae63b653a01f775ad5677a87d80e1e366b3bba03ea4d96 not found: ID does not exist"
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.936383 4767 scope.go:117] "RemoveContainer" containerID="7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9"
Nov 24 23:26:35 crc kubenswrapper[4767]: E1124 23:26:35.937004 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9\": container with ID starting with 7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9 not found: ID does not exist" containerID="7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9"
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.937060 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9"} err="failed to get container status \"7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9\": rpc error: code = NotFound desc = could not find container \"7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9\": container with ID starting with 7a693ef8fe42e609d6b7c53cd15cfe526480c34f0dc9945b82de6de1233719c9 not found: ID does not exist"
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.937107 4767 scope.go:117] "RemoveContainer" containerID="08486389c6dadefc9882171366746ff258b73b0d7c18cc56a4ed31c3ff33b4d9"
Nov 24 23:26:35 crc kubenswrapper[4767]: E1124 23:26:35.937663 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08486389c6dadefc9882171366746ff258b73b0d7c18cc56a4ed31c3ff33b4d9\": container with ID starting with 08486389c6dadefc9882171366746ff258b73b0d7c18cc56a4ed31c3ff33b4d9 not found: ID does not exist" containerID="08486389c6dadefc9882171366746ff258b73b0d7c18cc56a4ed31c3ff33b4d9"
Nov 24 23:26:35 crc kubenswrapper[4767]: I1124 23:26:35.937773 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08486389c6dadefc9882171366746ff258b73b0d7c18cc56a4ed31c3ff33b4d9"} err="failed to get container status \"08486389c6dadefc9882171366746ff258b73b0d7c18cc56a4ed31c3ff33b4d9\": rpc error: code = NotFound desc = could not find container \"08486389c6dadefc9882171366746ff258b73b0d7c18cc56a4ed31c3ff33b4d9\": container with ID starting with 08486389c6dadefc9882171366746ff258b73b0d7c18cc56a4ed31c3ff33b4d9 not found: ID does not exist"
Nov 24 23:26:36 crc kubenswrapper[4767]: I1124 23:26:36.332544 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" path="/var/lib/kubelet/pods/951043a2-c60a-49dd-80c4-4026f3f9b1e9/volumes"
Nov 24 23:26:47 crc kubenswrapper[4767]: I1124 23:26:47.314111 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:26:47 crc kubenswrapper[4767]: E1124 23:26:47.315236 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:26:59 crc kubenswrapper[4767]: I1124 23:26:59.313208 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:26:59 crc kubenswrapper[4767]: E1124 23:26:59.314496 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:27:12 crc kubenswrapper[4767]: I1124 23:27:12.313053 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:27:12 crc kubenswrapper[4767]: E1124 23:27:12.313782 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:27:25 crc kubenswrapper[4767]: I1124 23:27:25.314308 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:27:25 crc kubenswrapper[4767]: E1124 23:27:25.316192 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:27:37 crc kubenswrapper[4767]: I1124 23:27:37.314133 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:27:37 crc kubenswrapper[4767]: E1124 23:27:37.316369 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:27:49 crc kubenswrapper[4767]: I1124 23:27:49.313721 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:27:49 crc kubenswrapper[4767]: E1124 23:27:49.315207 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:28:04 crc kubenswrapper[4767]: I1124 23:28:04.313999 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:28:04 crc kubenswrapper[4767]: E1124 23:28:04.316258 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:28:04 crc kubenswrapper[4767]: I1124 23:28:04.922993 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-64b748f489-f8d4f" podUID="92516271-3ccd-4f57-866d-7242ab4b50c6" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Nov 24 23:28:18 crc kubenswrapper[4767]: I1124 23:28:18.323182 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:28:18 crc kubenswrapper[4767]: E1124 23:28:18.324235 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:28:32 crc kubenswrapper[4767]: I1124 23:28:32.314511 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:28:32 crc kubenswrapper[4767]: E1124 23:28:32.315493 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:28:45 crc kubenswrapper[4767]: I1124 23:28:45.314209 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:28:45 crc kubenswrapper[4767]: E1124 23:28:45.315674 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:28:58 crc kubenswrapper[4767]: I1124 23:28:58.327024 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200"
Nov 24 23:28:58 crc kubenswrapper[4767]: E1124 23:28:58.328484 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.051762 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2p2p5"]
Nov 24 23:29:11 crc kubenswrapper[4767]: E1124 23:29:11.053124 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerName="extract-content"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.053148 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerName="extract-content"
Nov 24 23:29:11 crc kubenswrapper[4767]: E1124 23:29:11.053187 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerName="registry-server"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.053198 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerName="registry-server"
Nov 24 23:29:11 crc kubenswrapper[4767]: E1124 23:29:11.053223 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerName="extract-utilities"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.053237 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerName="extract-utilities"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.053585 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="951043a2-c60a-49dd-80c4-4026f3f9b1e9" containerName="registry-server"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.055600 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2p2p5"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.074029 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p2p5"]
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.152018 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-catalog-content\") pod \"redhat-marketplace-2p2p5\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " pod="openshift-marketplace/redhat-marketplace-2p2p5"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.152124 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-utilities\") pod \"redhat-marketplace-2p2p5\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " pod="openshift-marketplace/redhat-marketplace-2p2p5"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.152246 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2lxj\" (UniqueName: \"kubernetes.io/projected/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-kube-api-access-t2lxj\") pod \"redhat-marketplace-2p2p5\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " pod="openshift-marketplace/redhat-marketplace-2p2p5"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.254147 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2lxj\" (UniqueName: \"kubernetes.io/projected/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-kube-api-access-t2lxj\") pod \"redhat-marketplace-2p2p5\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " pod="openshift-marketplace/redhat-marketplace-2p2p5"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.254246 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-catalog-content\") pod \"redhat-marketplace-2p2p5\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " pod="openshift-marketplace/redhat-marketplace-2p2p5"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.254353 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-utilities\") pod \"redhat-marketplace-2p2p5\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " pod="openshift-marketplace/redhat-marketplace-2p2p5"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.254877 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-utilities\") pod \"redhat-marketplace-2p2p5\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " pod="openshift-marketplace/redhat-marketplace-2p2p5"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.255287 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-catalog-content\") pod \"redhat-marketplace-2p2p5\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " pod="openshift-marketplace/redhat-marketplace-2p2p5"
Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.277342 4767 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"kube-api-access-t2lxj\" (UniqueName: \"kubernetes.io/projected/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-kube-api-access-t2lxj\") pod \"redhat-marketplace-2p2p5\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " pod="openshift-marketplace/redhat-marketplace-2p2p5" Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.380836 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2p2p5" Nov 24 23:29:11 crc kubenswrapper[4767]: I1124 23:29:11.865720 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p2p5"] Nov 24 23:29:12 crc kubenswrapper[4767]: E1124 23:29:12.293389 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c98b460_0de7_4ff9_b7d6_f23aeb11e616.slice/crio-conmon-fbbe543c7072719c949bd4cd6b1eecf5499d2316f1fb6743b28cfe9c9c142152.scope\": RecentStats: unable to find data in memory cache]" Nov 24 23:29:12 crc kubenswrapper[4767]: I1124 23:29:12.754791 4767 generic.go:334] "Generic (PLEG): container finished" podID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" containerID="fbbe543c7072719c949bd4cd6b1eecf5499d2316f1fb6743b28cfe9c9c142152" exitCode=0 Nov 24 23:29:12 crc kubenswrapper[4767]: I1124 23:29:12.754931 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p2p5" event={"ID":"8c98b460-0de7-4ff9-b7d6-f23aeb11e616","Type":"ContainerDied","Data":"fbbe543c7072719c949bd4cd6b1eecf5499d2316f1fb6743b28cfe9c9c142152"} Nov 24 23:29:12 crc kubenswrapper[4767]: I1124 23:29:12.755110 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p2p5" event={"ID":"8c98b460-0de7-4ff9-b7d6-f23aeb11e616","Type":"ContainerStarted","Data":"9586e4e015911a5d688c114e351ef092b2a534243912d1a0909d9dd6915c8ec0"} Nov 24 23:29:12 crc kubenswrapper[4767]: I1124 23:29:12.757203 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 23:29:13 crc kubenswrapper[4767]: I1124 23:29:13.313846 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200" Nov 24 23:29:13 crc kubenswrapper[4767]: E1124 23:29:13.314735 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:29:13 crc kubenswrapper[4767]: I1124 23:29:13.766745 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p2p5" event={"ID":"8c98b460-0de7-4ff9-b7d6-f23aeb11e616","Type":"ContainerStarted","Data":"6bc1e72eb6aa3156128a8f2d7e6acf2b65cc65b68f6a9f51995e1264b0dca9c4"} Nov 24 23:29:14 crc kubenswrapper[4767]: I1124 23:29:14.781501 4767 generic.go:334] "Generic (PLEG): container finished" podID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" containerID="6bc1e72eb6aa3156128a8f2d7e6acf2b65cc65b68f6a9f51995e1264b0dca9c4" exitCode=0 Nov 24 23:29:14 crc kubenswrapper[4767]: I1124 23:29:14.781950 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p2p5" 
event={"ID":"8c98b460-0de7-4ff9-b7d6-f23aeb11e616","Type":"ContainerDied","Data":"6bc1e72eb6aa3156128a8f2d7e6acf2b65cc65b68f6a9f51995e1264b0dca9c4"} Nov 24 23:29:15 crc kubenswrapper[4767]: I1124 23:29:15.794211 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p2p5" event={"ID":"8c98b460-0de7-4ff9-b7d6-f23aeb11e616","Type":"ContainerStarted","Data":"8f85662032e9be473ba58f6fe190b8c004a22e64c4fa7adfd5f2e5854fb2f80a"} Nov 24 23:29:15 crc kubenswrapper[4767]: I1124 23:29:15.820460 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2p2p5" podStartSLOduration=2.38155856 podStartE2EDuration="4.820435461s" podCreationTimestamp="2025-11-24 23:29:11 +0000 UTC" firstStartedPulling="2025-11-24 23:29:12.756990493 +0000 UTC m=+6635.673973865" lastFinishedPulling="2025-11-24 23:29:15.195867394 +0000 UTC m=+6638.112850766" observedRunningTime="2025-11-24 23:29:15.812152057 +0000 UTC m=+6638.729135449" watchObservedRunningTime="2025-11-24 23:29:15.820435461 +0000 UTC m=+6638.737418873" Nov 24 23:29:15 crc kubenswrapper[4767]: I1124 23:29:15.840184 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7fhnz"] Nov 24 23:29:15 crc kubenswrapper[4767]: I1124 23:29:15.842701 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:15 crc kubenswrapper[4767]: I1124 23:29:15.861135 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7fhnz"] Nov 24 23:29:15 crc kubenswrapper[4767]: I1124 23:29:15.945206 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-catalog-content\") pod \"certified-operators-7fhnz\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:15 crc kubenswrapper[4767]: I1124 23:29:15.945435 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-utilities\") pod \"certified-operators-7fhnz\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:15 crc kubenswrapper[4767]: I1124 23:29:15.945477 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94fj\" (UniqueName: \"kubernetes.io/projected/a830dab9-ca6a-461a-91b5-95abe15f32ec-kube-api-access-b94fj\") pod \"certified-operators-7fhnz\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:16 crc kubenswrapper[4767]: I1124 23:29:16.047987 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-utilities\") pod \"certified-operators-7fhnz\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:16 crc kubenswrapper[4767]: I1124 23:29:16.048046 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94fj\" (UniqueName: \"kubernetes.io/projected/a830dab9-ca6a-461a-91b5-95abe15f32ec-kube-api-access-b94fj\") pod 
\"certified-operators-7fhnz\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:16 crc kubenswrapper[4767]: I1124 23:29:16.048155 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-catalog-content\") pod \"certified-operators-7fhnz\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:16 crc kubenswrapper[4767]: I1124 23:29:16.048652 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-utilities\") pod \"certified-operators-7fhnz\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:16 crc kubenswrapper[4767]: I1124 23:29:16.048663 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-catalog-content\") pod \"certified-operators-7fhnz\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:16 crc kubenswrapper[4767]: I1124 23:29:16.085517 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94fj\" (UniqueName: \"kubernetes.io/projected/a830dab9-ca6a-461a-91b5-95abe15f32ec-kube-api-access-b94fj\") pod \"certified-operators-7fhnz\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:16 crc kubenswrapper[4767]: I1124 23:29:16.164421 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:16 crc kubenswrapper[4767]: I1124 23:29:16.668051 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7fhnz"] Nov 24 23:29:16 crc kubenswrapper[4767]: W1124 23:29:16.671020 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda830dab9_ca6a_461a_91b5_95abe15f32ec.slice/crio-385058e86cb9de90953938f178336def314f59ebf9fe78b8dd771240e32709d7 WatchSource:0}: Error finding container 385058e86cb9de90953938f178336def314f59ebf9fe78b8dd771240e32709d7: Status 404 returned error can't find the container with id 385058e86cb9de90953938f178336def314f59ebf9fe78b8dd771240e32709d7 Nov 24 23:29:16 crc kubenswrapper[4767]: I1124 23:29:16.814220 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fhnz" event={"ID":"a830dab9-ca6a-461a-91b5-95abe15f32ec","Type":"ContainerStarted","Data":"385058e86cb9de90953938f178336def314f59ebf9fe78b8dd771240e32709d7"} Nov 24 23:29:17 crc kubenswrapper[4767]: I1124 23:29:17.824457 4767 generic.go:334] "Generic (PLEG): container finished" podID="a830dab9-ca6a-461a-91b5-95abe15f32ec" containerID="00d097cda9007f9bba2b068202547fb45eac0774ffa1d49caef12f9c1c5407d8" exitCode=0 Nov 24 23:29:17 crc kubenswrapper[4767]: I1124 23:29:17.824569 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fhnz" event={"ID":"a830dab9-ca6a-461a-91b5-95abe15f32ec","Type":"ContainerDied","Data":"00d097cda9007f9bba2b068202547fb45eac0774ffa1d49caef12f9c1c5407d8"} Nov 24 23:29:18 crc kubenswrapper[4767]: I1124 23:29:18.835950 4767 generic.go:334] "Generic (PLEG): container finished" podID="a830dab9-ca6a-461a-91b5-95abe15f32ec" containerID="f6a8bc582500e3d2e94d4951187f57383da488237c373b87147a14bcdd960510" exitCode=0 Nov 24 23:29:18 crc kubenswrapper[4767]: I1124 23:29:18.836029 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fhnz" event={"ID":"a830dab9-ca6a-461a-91b5-95abe15f32ec","Type":"ContainerDied","Data":"f6a8bc582500e3d2e94d4951187f57383da488237c373b87147a14bcdd960510"} Nov 24 23:29:19 crc kubenswrapper[4767]: I1124 23:29:19.856946 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fhnz" event={"ID":"a830dab9-ca6a-461a-91b5-95abe15f32ec","Type":"ContainerStarted","Data":"a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141"} Nov 24 23:29:19 crc kubenswrapper[4767]: I1124 23:29:19.891485 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7fhnz" podStartSLOduration=3.485804427 podStartE2EDuration="4.891460136s" podCreationTimestamp="2025-11-24 23:29:15 +0000 UTC" firstStartedPulling="2025-11-24 23:29:17.82731182 +0000 UTC m=+6640.744295192" lastFinishedPulling="2025-11-24 23:29:19.232967529 +0000 UTC m=+6642.149950901" observedRunningTime="2025-11-24 23:29:19.878568981 +0000 UTC m=+6642.795552373" watchObservedRunningTime="2025-11-24 23:29:19.891460136 +0000 UTC m=+6642.808443518" Nov 24 23:29:21 crc kubenswrapper[4767]: I1124 23:29:21.381487 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2p2p5"
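The "Observed pod startup duration" records above carry enough fields to reconstruct their own arithmetic: podStartSLOduration comes out to podStartE2EDuration minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). That relationship is inferred from this log rather than taken from kubelet documentation; a quick check in Python with the values copied from the redhat-marketplace-2p2p5 and certified-operators-7fhnz records (using the monotonic m=+... readings to avoid wall-clock parsing):

```python
# Verify: podStartSLOduration == podStartE2EDuration - image-pull window.
# Values are copied from the two pod_startup_latency_tracker records above;
# the formula itself is an inference from this log, not documented behavior.
records = [
    # (pod, firstStartedPulling m=, lastFinishedPulling m=, E2E secs, logged SLO secs)
    ("redhat-marketplace-2p2p5",  6635.673973865, 6638.112850766, 4.820435461, 2.38155856),
    ("certified-operators-7fhnz", 6640.744295192, 6642.149950901, 4.891460136, 3.485804427),
]
for pod, first_pull, last_pull, e2e, logged in records:
    computed = e2e - (last_pull - first_pull)
    print(f"{pod}: computed SLO duration {computed:.9f}s, logged {logged}")
    assert abs(computed - logged) < 1e-6  # both rows agree to the logged digits
```

The must-gather-jffcq record later in this log fits the same pattern: 11.448830416 - (6688.884008678 - 6684.435743958) = 7.000565696, matching its logged podStartSLOduration.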
pod="openshift-marketplace/redhat-marketplace-2p2p5" Nov 24 23:29:21 crc kubenswrapper[4767]: I1124 23:29:21.448163 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2p2p5" Nov 24 23:29:21 crc kubenswrapper[4767]: I1124 23:29:21.942757 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2p2p5" Nov 24 23:29:23 crc kubenswrapper[4767]: I1124 23:29:23.280260 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p2p5"] Nov 24 23:29:23 crc kubenswrapper[4767]: I1124 23:29:23.901347 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2p2p5" podUID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" containerName="registry-server" containerID="cri-o://8f85662032e9be473ba58f6fe190b8c004a22e64c4fa7adfd5f2e5854fb2f80a" gracePeriod=2 Nov 24 23:29:24 crc kubenswrapper[4767]: I1124 23:29:24.913640 4767 generic.go:334] "Generic (PLEG): container finished" podID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" containerID="8f85662032e9be473ba58f6fe190b8c004a22e64c4fa7adfd5f2e5854fb2f80a" exitCode=0 Nov 24 23:29:24 crc kubenswrapper[4767]: I1124 23:29:24.913753 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p2p5" event={"ID":"8c98b460-0de7-4ff9-b7d6-f23aeb11e616","Type":"ContainerDied","Data":"8f85662032e9be473ba58f6fe190b8c004a22e64c4fa7adfd5f2e5854fb2f80a"} Nov 24 23:29:24 crc kubenswrapper[4767]: I1124 23:29:24.914288 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p2p5" event={"ID":"8c98b460-0de7-4ff9-b7d6-f23aeb11e616","Type":"ContainerDied","Data":"9586e4e015911a5d688c114e351ef092b2a534243912d1a0909d9dd6915c8ec0"} Nov 24 23:29:24 crc kubenswrapper[4767]: I1124 23:29:24.914319 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9586e4e015911a5d688c114e351ef092b2a534243912d1a0909d9dd6915c8ec0" Nov 24 23:29:24 crc kubenswrapper[4767]: I1124 23:29:24.963563 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2p2p5" Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.043536 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2lxj\" (UniqueName: \"kubernetes.io/projected/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-kube-api-access-t2lxj\") pod \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.043596 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-utilities\") pod \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.043762 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-catalog-content\") pod \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\" (UID: \"8c98b460-0de7-4ff9-b7d6-f23aeb11e616\") " Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.045094 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-utilities" (OuterVolumeSpecName: "utilities") pod "8c98b460-0de7-4ff9-b7d6-f23aeb11e616" (UID: "8c98b460-0de7-4ff9-b7d6-f23aeb11e616"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.050658 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-kube-api-access-t2lxj" (OuterVolumeSpecName: "kube-api-access-t2lxj") pod "8c98b460-0de7-4ff9-b7d6-f23aeb11e616" (UID: "8c98b460-0de7-4ff9-b7d6-f23aeb11e616"). InnerVolumeSpecName "kube-api-access-t2lxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.066699 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c98b460-0de7-4ff9-b7d6-f23aeb11e616" (UID: "8c98b460-0de7-4ff9-b7d6-f23aeb11e616"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.147355 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.147401 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2lxj\" (UniqueName: \"kubernetes.io/projected/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-kube-api-access-t2lxj\") on node \"crc\" DevicePath \"\"" Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.147422 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c98b460-0de7-4ff9-b7d6-f23aeb11e616-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.923597 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2p2p5" Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.973745 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p2p5"] Nov 24 23:29:25 crc kubenswrapper[4767]: I1124 23:29:25.984093 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p2p5"] Nov 24 23:29:26 crc kubenswrapper[4767]: I1124 23:29:26.165549 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:26 crc kubenswrapper[4767]: I1124 23:29:26.165594 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:26 crc kubenswrapper[4767]: I1124 23:29:26.246240 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:26 crc kubenswrapper[4767]: I1124 23:29:26.313816 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200" Nov 24 23:29:26 crc kubenswrapper[4767]: E1124 23:29:26.314132 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:29:26 crc kubenswrapper[4767]: I1124 23:29:26.326018 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" path="/var/lib/kubelet/pods/8c98b460-0de7-4ff9-b7d6-f23aeb11e616/volumes" Nov 24 23:29:27 crc kubenswrapper[4767]: I1124 23:29:27.013849 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:28 crc kubenswrapper[4767]: I1124 23:29:28.432525 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7fhnz"] Nov 24 23:29:28 crc kubenswrapper[4767]: I1124 23:29:28.952048 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7fhnz" podUID="a830dab9-ca6a-461a-91b5-95abe15f32ec" containerName="registry-server" containerID="cri-o://a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141" gracePeriod=2 Nov 24 23:29:29 crc kubenswrapper[4767]: I1124 23:29:29.909603 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:29 crc kubenswrapper[4767]: I1124 23:29:29.953320 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-catalog-content\") pod \"a830dab9-ca6a-461a-91b5-95abe15f32ec\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " Nov 24 23:29:29 crc kubenswrapper[4767]: I1124 23:29:29.953759 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-utilities\") pod \"a830dab9-ca6a-461a-91b5-95abe15f32ec\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " Nov 24 23:29:29 crc kubenswrapper[4767]: I1124 23:29:29.953855 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b94fj\" (UniqueName: \"kubernetes.io/projected/a830dab9-ca6a-461a-91b5-95abe15f32ec-kube-api-access-b94fj\") pod \"a830dab9-ca6a-461a-91b5-95abe15f32ec\" (UID: \"a830dab9-ca6a-461a-91b5-95abe15f32ec\") " Nov 24 23:29:29 crc kubenswrapper[4767]: I1124 23:29:29.955157 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-utilities" (OuterVolumeSpecName: "utilities") pod "a830dab9-ca6a-461a-91b5-95abe15f32ec" (UID: "a830dab9-ca6a-461a-91b5-95abe15f32ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:29:29 crc kubenswrapper[4767]: I1124 23:29:29.961309 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a830dab9-ca6a-461a-91b5-95abe15f32ec-kube-api-access-b94fj" (OuterVolumeSpecName: "kube-api-access-b94fj") pod "a830dab9-ca6a-461a-91b5-95abe15f32ec" (UID: "a830dab9-ca6a-461a-91b5-95abe15f32ec"). InnerVolumeSpecName "kube-api-access-b94fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:29:29 crc kubenswrapper[4767]: I1124 23:29:29.964834 4767 generic.go:334] "Generic (PLEG): container finished" podID="a830dab9-ca6a-461a-91b5-95abe15f32ec" containerID="a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141" exitCode=0 Nov 24 23:29:29 crc kubenswrapper[4767]: I1124 23:29:29.964870 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fhnz" event={"ID":"a830dab9-ca6a-461a-91b5-95abe15f32ec","Type":"ContainerDied","Data":"a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141"} Nov 24 23:29:29 crc kubenswrapper[4767]: I1124 23:29:29.964918 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fhnz" event={"ID":"a830dab9-ca6a-461a-91b5-95abe15f32ec","Type":"ContainerDied","Data":"385058e86cb9de90953938f178336def314f59ebf9fe78b8dd771240e32709d7"} Nov 24 23:29:29 crc kubenswrapper[4767]: I1124 23:29:29.964940 4767 scope.go:117] "RemoveContainer" containerID="a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141" Nov 24 23:29:29 crc kubenswrapper[4767]: I1124 23:29:29.965076 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7fhnz" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.010198 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a830dab9-ca6a-461a-91b5-95abe15f32ec" (UID: "a830dab9-ca6a-461a-91b5-95abe15f32ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.026835 4767 scope.go:117] "RemoveContainer" containerID="f6a8bc582500e3d2e94d4951187f57383da488237c373b87147a14bcdd960510" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.052602 4767 scope.go:117] "RemoveContainer" containerID="00d097cda9007f9bba2b068202547fb45eac0774ffa1d49caef12f9c1c5407d8" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.055620 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.055639 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a830dab9-ca6a-461a-91b5-95abe15f32ec-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.055649 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b94fj\" (UniqueName: \"kubernetes.io/projected/a830dab9-ca6a-461a-91b5-95abe15f32ec-kube-api-access-b94fj\") on node \"crc\" DevicePath \"\"" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.094680 4767 scope.go:117] "RemoveContainer" containerID="a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141" Nov 24 23:29:30 crc kubenswrapper[4767]: E1124 23:29:30.096682 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141\": container with ID starting with a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141 not found: ID does not exist" containerID="a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.096751 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141"} err="failed to get container status \"a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141\": rpc error: code = NotFound desc = could not find container \"a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141\": container with ID starting with a0aff9ff97100b9a9690db8a35b57fc3389603c7e80c4b77e247b8470967b141 not found: ID does not exist" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.096802 4767 scope.go:117] "RemoveContainer" containerID="f6a8bc582500e3d2e94d4951187f57383da488237c373b87147a14bcdd960510" Nov 24 23:29:30 crc kubenswrapper[4767]: E1124 23:29:30.097210 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6a8bc582500e3d2e94d4951187f57383da488237c373b87147a14bcdd960510\": container with ID starting with f6a8bc582500e3d2e94d4951187f57383da488237c373b87147a14bcdd960510 not found: ID does not exist" 
containerID="f6a8bc582500e3d2e94d4951187f57383da488237c373b87147a14bcdd960510" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.097253 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a8bc582500e3d2e94d4951187f57383da488237c373b87147a14bcdd960510"} err="failed to get container status \"f6a8bc582500e3d2e94d4951187f57383da488237c373b87147a14bcdd960510\": rpc error: code = NotFound desc = could not find container \"f6a8bc582500e3d2e94d4951187f57383da488237c373b87147a14bcdd960510\": container with ID starting with f6a8bc582500e3d2e94d4951187f57383da488237c373b87147a14bcdd960510 not found: ID does not exist" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.097296 4767 scope.go:117] "RemoveContainer" containerID="00d097cda9007f9bba2b068202547fb45eac0774ffa1d49caef12f9c1c5407d8" Nov 24 23:29:30 crc kubenswrapper[4767]: E1124 23:29:30.097577 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d097cda9007f9bba2b068202547fb45eac0774ffa1d49caef12f9c1c5407d8\": container with ID starting with 00d097cda9007f9bba2b068202547fb45eac0774ffa1d49caef12f9c1c5407d8 not found: ID does not exist" containerID="00d097cda9007f9bba2b068202547fb45eac0774ffa1d49caef12f9c1c5407d8" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.097604 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d097cda9007f9bba2b068202547fb45eac0774ffa1d49caef12f9c1c5407d8"} err="failed to get container status \"00d097cda9007f9bba2b068202547fb45eac0774ffa1d49caef12f9c1c5407d8\": rpc error: code = NotFound desc = could not find container \"00d097cda9007f9bba2b068202547fb45eac0774ffa1d49caef12f9c1c5407d8\": container with ID starting with 00d097cda9007f9bba2b068202547fb45eac0774ffa1d49caef12f9c1c5407d8 not found: ID does not exist" Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.334879 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7fhnz"] Nov 24 23:29:30 crc kubenswrapper[4767]: I1124 23:29:30.334941 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7fhnz"] Nov 24 23:29:32 crc kubenswrapper[4767]: I1124 23:29:32.333413 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a830dab9-ca6a-461a-91b5-95abe15f32ec" path="/var/lib/kubelet/pods/a830dab9-ca6a-461a-91b5-95abe15f32ec/volumes" Nov 24 23:29:40 crc kubenswrapper[4767]: I1124 23:29:40.314396 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200" Nov 24 23:29:41 crc kubenswrapper[4767]: I1124 23:29:41.102399 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"5c7cc72074d182d5318650835206882a6f9a9af381df20391df98140e9145d85"} Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.199121 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5"] Nov 24 23:30:00 crc kubenswrapper[4767]: E1124 23:30:00.200021 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a830dab9-ca6a-461a-91b5-95abe15f32ec" containerName="extract-content" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.200034 4767 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a830dab9-ca6a-461a-91b5-95abe15f32ec" containerName="extract-content" Nov 24 23:30:00 crc kubenswrapper[4767]: E1124 23:30:00.200046 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" containerName="extract-content" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.200052 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" containerName="extract-content" Nov 24 23:30:00 crc kubenswrapper[4767]: E1124 23:30:00.200063 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a830dab9-ca6a-461a-91b5-95abe15f32ec" containerName="extract-utilities" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.200069 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="a830dab9-ca6a-461a-91b5-95abe15f32ec" containerName="extract-utilities" Nov 24 23:30:00 crc kubenswrapper[4767]: E1124 23:30:00.200088 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a830dab9-ca6a-461a-91b5-95abe15f32ec" containerName="registry-server" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.200094 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="a830dab9-ca6a-461a-91b5-95abe15f32ec" containerName="registry-server" Nov 24 23:30:00 crc kubenswrapper[4767]: E1124 23:30:00.200113 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" containerName="registry-server" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.200118 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" containerName="registry-server" Nov 24 23:30:00 crc kubenswrapper[4767]: E1124 23:30:00.200130 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" containerName="extract-utilities" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.200135 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" containerName="extract-utilities" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.200378 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="a830dab9-ca6a-461a-91b5-95abe15f32ec" containerName="registry-server" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.200400 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c98b460-0de7-4ff9-b7d6-f23aeb11e616" containerName="registry-server" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.201193 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.203338 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.207608 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.212182 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5"] Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.334319 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9552cf12-4598-425f-89c3-4208d9c39b7c-secret-volume\") pod \"collect-profiles-29400450-5wmk5\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.334648 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9552cf12-4598-425f-89c3-4208d9c39b7c-config-volume\") pod \"collect-profiles-29400450-5wmk5\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.334757 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9m9m\" (UniqueName: \"kubernetes.io/projected/9552cf12-4598-425f-89c3-4208d9c39b7c-kube-api-access-p9m9m\") pod \"collect-profiles-29400450-5wmk5\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.437111 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9552cf12-4598-425f-89c3-4208d9c39b7c-secret-volume\") pod \"collect-profiles-29400450-5wmk5\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.437183 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9552cf12-4598-425f-89c3-4208d9c39b7c-config-volume\") pod \"collect-profiles-29400450-5wmk5\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.437200 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9m9m\" (UniqueName: \"kubernetes.io/projected/9552cf12-4598-425f-89c3-4208d9c39b7c-kube-api-access-p9m9m\") pod \"collect-profiles-29400450-5wmk5\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.439408 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9552cf12-4598-425f-89c3-4208d9c39b7c-config-volume\") pod 
\"collect-profiles-29400450-5wmk5\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.447678 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9552cf12-4598-425f-89c3-4208d9c39b7c-secret-volume\") pod \"collect-profiles-29400450-5wmk5\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.479700 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9m9m\" (UniqueName: \"kubernetes.io/projected/9552cf12-4598-425f-89c3-4208d9c39b7c-kube-api-access-p9m9m\") pod \"collect-profiles-29400450-5wmk5\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.532172 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.594110 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xf7h4/must-gather-jffcq"] Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.602339 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xf7h4/must-gather-jffcq" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.611630 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xf7h4"/"openshift-service-ca.crt" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.611706 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xf7h4"/"kube-root-ca.crt" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.611897 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xf7h4"/"default-dockercfg-drv7n" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.640203 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a93d272-c118-4fa4-9e21-608657fd04a0-must-gather-output\") pod \"must-gather-jffcq\" (UID: \"7a93d272-c118-4fa4-9e21-608657fd04a0\") " pod="openshift-must-gather-xf7h4/must-gather-jffcq" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.640258 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwwqj\" (UniqueName: \"kubernetes.io/projected/7a93d272-c118-4fa4-9e21-608657fd04a0-kube-api-access-nwwqj\") pod \"must-gather-jffcq\" (UID: \"7a93d272-c118-4fa4-9e21-608657fd04a0\") " pod="openshift-must-gather-xf7h4/must-gather-jffcq" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.643028 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xf7h4/must-gather-jffcq"] Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.741741 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a93d272-c118-4fa4-9e21-608657fd04a0-must-gather-output\") pod \"must-gather-jffcq\" (UID: \"7a93d272-c118-4fa4-9e21-608657fd04a0\") " pod="openshift-must-gather-xf7h4/must-gather-jffcq" Nov 24 23:30:00 crc 
kubenswrapper[4767]: I1124 23:30:00.741802 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwwqj\" (UniqueName: \"kubernetes.io/projected/7a93d272-c118-4fa4-9e21-608657fd04a0-kube-api-access-nwwqj\") pod \"must-gather-jffcq\" (UID: \"7a93d272-c118-4fa4-9e21-608657fd04a0\") " pod="openshift-must-gather-xf7h4/must-gather-jffcq" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.742700 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a93d272-c118-4fa4-9e21-608657fd04a0-must-gather-output\") pod \"must-gather-jffcq\" (UID: \"7a93d272-c118-4fa4-9e21-608657fd04a0\") " pod="openshift-must-gather-xf7h4/must-gather-jffcq" Nov 24 23:30:00 crc kubenswrapper[4767]: I1124 23:30:00.792063 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwwqj\" (UniqueName: \"kubernetes.io/projected/7a93d272-c118-4fa4-9e21-608657fd04a0-kube-api-access-nwwqj\") pod \"must-gather-jffcq\" (UID: \"7a93d272-c118-4fa4-9e21-608657fd04a0\") " pod="openshift-must-gather-xf7h4/must-gather-jffcq" Nov 24 23:30:01 crc kubenswrapper[4767]: I1124 23:30:01.007557 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xf7h4/must-gather-jffcq" Nov 24 23:30:01 crc kubenswrapper[4767]: I1124 23:30:01.148514 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5"] Nov 24 23:30:01 crc kubenswrapper[4767]: W1124 23:30:01.148634 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9552cf12_4598_425f_89c3_4208d9c39b7c.slice/crio-fb4eedd48e5a4f101ca456ad4182521b600f6c6ced0733a1ed3f91651e19aa79 WatchSource:0}: Error finding container fb4eedd48e5a4f101ca456ad4182521b600f6c6ced0733a1ed3f91651e19aa79: Status 404 returned error can't find the container with id fb4eedd48e5a4f101ca456ad4182521b600f6c6ced0733a1ed3f91651e19aa79 Nov 24 23:30:01 crc kubenswrapper[4767]: I1124 23:30:01.371529 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" event={"ID":"9552cf12-4598-425f-89c3-4208d9c39b7c","Type":"ContainerStarted","Data":"45c7f3aca6a47beeb04ecb9a5ccd8577704980c5d8d034b203a6754157210abb"} Nov 24 23:30:01 crc kubenswrapper[4767]: I1124 23:30:01.371573 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" event={"ID":"9552cf12-4598-425f-89c3-4208d9c39b7c","Type":"ContainerStarted","Data":"fb4eedd48e5a4f101ca456ad4182521b600f6c6ced0733a1ed3f91651e19aa79"} Nov 24 23:30:01 crc kubenswrapper[4767]: I1124 23:30:01.392600 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" podStartSLOduration=1.3925754989999999 podStartE2EDuration="1.392575499s" podCreationTimestamp="2025-11-24 23:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 23:30:01.38484416 +0000 UTC m=+6684.301827552" watchObservedRunningTime="2025-11-24 23:30:01.392575499 +0000 UTC m=+6684.309558871" Nov 24 23:30:01 crc kubenswrapper[4767]: W1124 23:30:01.515894 4767 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a93d272_c118_4fa4_9e21_608657fd04a0.slice/crio-bfec134be41b696e39b23b076d6ff094f65596a5cabaa286c00cb825cfcb6b66 WatchSource:0}: Error finding container bfec134be41b696e39b23b076d6ff094f65596a5cabaa286c00cb825cfcb6b66: Status 404 returned error can't find the container with id bfec134be41b696e39b23b076d6ff094f65596a5cabaa286c00cb825cfcb6b66 Nov 24 23:30:01 crc kubenswrapper[4767]: I1124 23:30:01.518081 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xf7h4/must-gather-jffcq"] Nov 24 23:30:02 crc kubenswrapper[4767]: I1124 23:30:02.382731 4767 generic.go:334] "Generic (PLEG): container finished" podID="9552cf12-4598-425f-89c3-4208d9c39b7c" containerID="45c7f3aca6a47beeb04ecb9a5ccd8577704980c5d8d034b203a6754157210abb" exitCode=0 Nov 24 23:30:02 crc kubenswrapper[4767]: I1124 23:30:02.382779 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" event={"ID":"9552cf12-4598-425f-89c3-4208d9c39b7c","Type":"ContainerDied","Data":"45c7f3aca6a47beeb04ecb9a5ccd8577704980c5d8d034b203a6754157210abb"} Nov 24 23:30:02 crc kubenswrapper[4767]: I1124 23:30:02.384247 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/must-gather-jffcq" event={"ID":"7a93d272-c118-4fa4-9e21-608657fd04a0","Type":"ContainerStarted","Data":"bfec134be41b696e39b23b076d6ff094f65596a5cabaa286c00cb825cfcb6b66"} Nov 24 23:30:03 crc kubenswrapper[4767]: I1124 23:30:03.754691 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:03 crc kubenswrapper[4767]: I1124 23:30:03.898047 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9552cf12-4598-425f-89c3-4208d9c39b7c-config-volume\") pod \"9552cf12-4598-425f-89c3-4208d9c39b7c\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " Nov 24 23:30:03 crc kubenswrapper[4767]: I1124 23:30:03.898404 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9552cf12-4598-425f-89c3-4208d9c39b7c-secret-volume\") pod \"9552cf12-4598-425f-89c3-4208d9c39b7c\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " Nov 24 23:30:03 crc kubenswrapper[4767]: I1124 23:30:03.898460 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9m9m\" (UniqueName: \"kubernetes.io/projected/9552cf12-4598-425f-89c3-4208d9c39b7c-kube-api-access-p9m9m\") pod \"9552cf12-4598-425f-89c3-4208d9c39b7c\" (UID: \"9552cf12-4598-425f-89c3-4208d9c39b7c\") " Nov 24 23:30:03 crc kubenswrapper[4767]: I1124 23:30:03.900195 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9552cf12-4598-425f-89c3-4208d9c39b7c-config-volume" (OuterVolumeSpecName: "config-volume") pod "9552cf12-4598-425f-89c3-4208d9c39b7c" (UID: "9552cf12-4598-425f-89c3-4208d9c39b7c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 23:30:03 crc kubenswrapper[4767]: I1124 23:30:03.915409 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9552cf12-4598-425f-89c3-4208d9c39b7c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9552cf12-4598-425f-89c3-4208d9c39b7c" (UID: "9552cf12-4598-425f-89c3-4208d9c39b7c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:30:03 crc kubenswrapper[4767]: I1124 23:30:03.929084 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9552cf12-4598-425f-89c3-4208d9c39b7c-kube-api-access-p9m9m" (OuterVolumeSpecName: "kube-api-access-p9m9m") pod "9552cf12-4598-425f-89c3-4208d9c39b7c" (UID: "9552cf12-4598-425f-89c3-4208d9c39b7c"). InnerVolumeSpecName "kube-api-access-p9m9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:30:04 crc kubenswrapper[4767]: I1124 23:30:04.000021 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9m9m\" (UniqueName: \"kubernetes.io/projected/9552cf12-4598-425f-89c3-4208d9c39b7c-kube-api-access-p9m9m\") on node \"crc\" DevicePath \"\"" Nov 24 23:30:04 crc kubenswrapper[4767]: I1124 23:30:04.000051 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9552cf12-4598-425f-89c3-4208d9c39b7c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 23:30:04 crc kubenswrapper[4767]: I1124 23:30:04.000060 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9552cf12-4598-425f-89c3-4208d9c39b7c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 23:30:04 crc kubenswrapper[4767]: I1124 23:30:04.402422 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" event={"ID":"9552cf12-4598-425f-89c3-4208d9c39b7c","Type":"ContainerDied","Data":"fb4eedd48e5a4f101ca456ad4182521b600f6c6ced0733a1ed3f91651e19aa79"} Nov 24 23:30:04 crc kubenswrapper[4767]: I1124 23:30:04.402458 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb4eedd48e5a4f101ca456ad4182521b600f6c6ced0733a1ed3f91651e19aa79" Nov 24 23:30:04 crc kubenswrapper[4767]: I1124 23:30:04.402488 4767 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 23:30:04 crc kubenswrapper[4767]: I1124 23:30:04.402488 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400450-5wmk5" Nov 24 23:30:04 crc kubenswrapper[4767]: I1124 23:30:04.462579 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm"] Nov 24 23:30:04 crc kubenswrapper[4767]: I1124 23:30:04.470623 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400405-xh5jm"] Nov 24 23:30:06 crc kubenswrapper[4767]: I1124 23:30:06.329370 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60baf41d-8aa3-4b07-a344-a3357d37ca4d" path="/var/lib/kubelet/pods/60baf41d-8aa3-4b07-a344-a3357d37ca4d/volumes" Nov 24 23:30:06 crc kubenswrapper[4767]: I1124 23:30:06.422367 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/must-gather-jffcq" event={"ID":"7a93d272-c118-4fa4-9e21-608657fd04a0","Type":"ContainerStarted","Data":"90d6cae2bd518dd4ae469c74d35fc3f6e1da72bab16223586b5e4bc3cdae9580"} Nov 24 23:30:07 crc kubenswrapper[4767]: I1124 23:30:07.432192 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/must-gather-jffcq" event={"ID":"7a93d272-c118-4fa4-9e21-608657fd04a0","Type":"ContainerStarted","Data":"abec92cf451669e9fd62dd4e1f8bd9f62c87281384423028b0fb17b421053687"} Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.448849 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xf7h4/must-gather-jffcq" podStartSLOduration=7.000565696 podStartE2EDuration="11.448830416s" podCreationTimestamp="2025-11-24 23:30:00 +0000 UTC" firstStartedPulling="2025-11-24 23:30:01.518760586 +0000 UTC m=+6684.435743958" lastFinishedPulling="2025-11-24 23:30:05.967025306 +0000 UTC m=+6688.884008678" observedRunningTime="2025-11-24 23:30:07.458954336 +0000 UTC m=+6690.375937718" watchObservedRunningTime="2025-11-24 23:30:11.448830416 +0000 UTC m=+6694.365813788" Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.453686 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xf7h4/crc-debug-s2b9p"] Nov 24 23:30:11 crc kubenswrapper[4767]: E1124 23:30:11.454064 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9552cf12-4598-425f-89c3-4208d9c39b7c" containerName="collect-profiles" Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.454079 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9552cf12-4598-425f-89c3-4208d9c39b7c" containerName="collect-profiles" Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.454255 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9552cf12-4598-425f-89c3-4208d9c39b7c" containerName="collect-profiles" Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.454925 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.503351 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-host\") pod \"crc-debug-s2b9p\" (UID: \"1f3f54ef-33a6-45f7-b28f-ffe5889d2945\") " pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.505214 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5fvr\" (UniqueName: \"kubernetes.io/projected/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-kube-api-access-h5fvr\") pod \"crc-debug-s2b9p\" (UID: \"1f3f54ef-33a6-45f7-b28f-ffe5889d2945\") " pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.613667 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-host\") pod \"crc-debug-s2b9p\" (UID: \"1f3f54ef-33a6-45f7-b28f-ffe5889d2945\") " pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.613812 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5fvr\" (UniqueName: \"kubernetes.io/projected/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-kube-api-access-h5fvr\") pod \"crc-debug-s2b9p\" (UID: \"1f3f54ef-33a6-45f7-b28f-ffe5889d2945\") " pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.614362 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-host\") pod \"crc-debug-s2b9p\" (UID: \"1f3f54ef-33a6-45f7-b28f-ffe5889d2945\") " pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.644491 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5fvr\" (UniqueName: \"kubernetes.io/projected/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-kube-api-access-h5fvr\") pod \"crc-debug-s2b9p\" (UID: \"1f3f54ef-33a6-45f7-b28f-ffe5889d2945\") " pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" Nov 24 23:30:11 crc kubenswrapper[4767]: I1124 23:30:11.772630 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" Nov 24 23:30:12 crc kubenswrapper[4767]: I1124 23:30:12.480689 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" event={"ID":"1f3f54ef-33a6-45f7-b28f-ffe5889d2945","Type":"ContainerStarted","Data":"12f70e7bc806df3ea4ee3a6de2184c2704d51724ab34ee7cfd190aab68432b28"} Nov 24 23:30:22 crc kubenswrapper[4767]: I1124 23:30:22.573217 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" event={"ID":"1f3f54ef-33a6-45f7-b28f-ffe5889d2945","Type":"ContainerStarted","Data":"3921daa2e34f164a20baa6f69e45d77e0acabcce7d22fff9f4f93c97b3039e07"} Nov 24 23:30:22 crc kubenswrapper[4767]: I1124 23:30:22.592331 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" podStartSLOduration=1.228401575 podStartE2EDuration="11.59230591s" podCreationTimestamp="2025-11-24 23:30:11 +0000 UTC" firstStartedPulling="2025-11-24 23:30:11.828907961 +0000 UTC m=+6694.745891333" lastFinishedPulling="2025-11-24 23:30:22.192812296 +0000 UTC m=+6705.109795668" observedRunningTime="2025-11-24 23:30:22.587836744 +0000 UTC m=+6705.504820116" watchObservedRunningTime="2025-11-24 23:30:22.59230591 +0000 UTC m=+6705.509289282" Nov 24 23:30:46 crc kubenswrapper[4767]: I1124 23:30:46.226964 4767 scope.go:117] "RemoveContainer" containerID="111b8465fb1b39b8f767442e11e334c4b3d3f6b52cfddf6b5ca9a679bf9201a7" Nov 24 23:31:13 crc kubenswrapper[4767]: I1124 23:31:13.075900 4767 generic.go:334] "Generic (PLEG): container finished" podID="1f3f54ef-33a6-45f7-b28f-ffe5889d2945" containerID="3921daa2e34f164a20baa6f69e45d77e0acabcce7d22fff9f4f93c97b3039e07" exitCode=0 Nov 24 23:31:13 crc kubenswrapper[4767]: I1124 23:31:13.075987 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" event={"ID":"1f3f54ef-33a6-45f7-b28f-ffe5889d2945","Type":"ContainerDied","Data":"3921daa2e34f164a20baa6f69e45d77e0acabcce7d22fff9f4f93c97b3039e07"} Nov 24 23:31:14 crc kubenswrapper[4767]: I1124 23:31:14.244951 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" Nov 24 23:31:14 crc kubenswrapper[4767]: I1124 23:31:14.288916 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xf7h4/crc-debug-s2b9p"] Nov 24 23:31:14 crc kubenswrapper[4767]: I1124 23:31:14.297699 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xf7h4/crc-debug-s2b9p"] Nov 24 23:31:14 crc kubenswrapper[4767]: I1124 23:31:14.362660 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5fvr\" (UniqueName: \"kubernetes.io/projected/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-kube-api-access-h5fvr\") pod \"1f3f54ef-33a6-45f7-b28f-ffe5889d2945\" (UID: \"1f3f54ef-33a6-45f7-b28f-ffe5889d2945\") " Nov 24 23:31:14 crc kubenswrapper[4767]: I1124 23:31:14.363082 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-host\") pod \"1f3f54ef-33a6-45f7-b28f-ffe5889d2945\" (UID: \"1f3f54ef-33a6-45f7-b28f-ffe5889d2945\") " Nov 24 23:31:14 crc kubenswrapper[4767]: I1124 23:31:14.363216 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-host" (OuterVolumeSpecName: "host") pod "1f3f54ef-33a6-45f7-b28f-ffe5889d2945" (UID: "1f3f54ef-33a6-45f7-b28f-ffe5889d2945"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 23:31:14 crc kubenswrapper[4767]: I1124 23:31:14.364218 4767 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-host\") on node \"crc\" DevicePath \"\"" Nov 24 23:31:14 crc kubenswrapper[4767]: I1124 23:31:14.369733 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-kube-api-access-h5fvr" (OuterVolumeSpecName: "kube-api-access-h5fvr") pod "1f3f54ef-33a6-45f7-b28f-ffe5889d2945" (UID: "1f3f54ef-33a6-45f7-b28f-ffe5889d2945"). InnerVolumeSpecName "kube-api-access-h5fvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:31:14 crc kubenswrapper[4767]: I1124 23:31:14.465887 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5fvr\" (UniqueName: \"kubernetes.io/projected/1f3f54ef-33a6-45f7-b28f-ffe5889d2945-kube-api-access-h5fvr\") on node \"crc\" DevicePath \"\"" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.099010 4767 scope.go:117] "RemoveContainer" containerID="3921daa2e34f164a20baa6f69e45d77e0acabcce7d22fff9f4f93c97b3039e07" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.099043 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-s2b9p" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.504908 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xf7h4/crc-debug-rkf8f"] Nov 24 23:31:15 crc kubenswrapper[4767]: E1124 23:31:15.505545 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3f54ef-33a6-45f7-b28f-ffe5889d2945" containerName="container-00" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.505558 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3f54ef-33a6-45f7-b28f-ffe5889d2945" containerName="container-00" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.505755 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3f54ef-33a6-45f7-b28f-ffe5889d2945" containerName="container-00" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.506427 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.587613 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7483195a-b928-4b24-a9bc-014d5e38f355-host\") pod \"crc-debug-rkf8f\" (UID: \"7483195a-b928-4b24-a9bc-014d5e38f355\") " pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.587831 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttnb6\" (UniqueName: \"kubernetes.io/projected/7483195a-b928-4b24-a9bc-014d5e38f355-kube-api-access-ttnb6\") pod \"crc-debug-rkf8f\" (UID: \"7483195a-b928-4b24-a9bc-014d5e38f355\") " pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.690079 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7483195a-b928-4b24-a9bc-014d5e38f355-host\") pod \"crc-debug-rkf8f\" (UID: \"7483195a-b928-4b24-a9bc-014d5e38f355\") " pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.690345 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7483195a-b928-4b24-a9bc-014d5e38f355-host\") pod \"crc-debug-rkf8f\" (UID: \"7483195a-b928-4b24-a9bc-014d5e38f355\") " pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.690536 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttnb6\" (UniqueName: \"kubernetes.io/projected/7483195a-b928-4b24-a9bc-014d5e38f355-kube-api-access-ttnb6\") pod \"crc-debug-rkf8f\" (UID: \"7483195a-b928-4b24-a9bc-014d5e38f355\") " pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.709053 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttnb6\" (UniqueName: \"kubernetes.io/projected/7483195a-b928-4b24-a9bc-014d5e38f355-kube-api-access-ttnb6\") pod \"crc-debug-rkf8f\" (UID: \"7483195a-b928-4b24-a9bc-014d5e38f355\") " pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" Nov 24 23:31:15 crc kubenswrapper[4767]: I1124 23:31:15.829512 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" Nov 24 23:31:15 crc kubenswrapper[4767]: W1124 23:31:15.871672 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7483195a_b928_4b24_a9bc_014d5e38f355.slice/crio-6a2668e2a8a538218262a51e3c9d6512b01ed258c722eb622bc4186537cd80ae WatchSource:0}: Error finding container 6a2668e2a8a538218262a51e3c9d6512b01ed258c722eb622bc4186537cd80ae: Status 404 returned error can't find the container with id 6a2668e2a8a538218262a51e3c9d6512b01ed258c722eb622bc4186537cd80ae Nov 24 23:31:16 crc kubenswrapper[4767]: I1124 23:31:16.116942 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" event={"ID":"7483195a-b928-4b24-a9bc-014d5e38f355","Type":"ContainerStarted","Data":"6a2668e2a8a538218262a51e3c9d6512b01ed258c722eb622bc4186537cd80ae"} Nov 24 23:31:16 crc kubenswrapper[4767]: I1124 23:31:16.328301 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3f54ef-33a6-45f7-b28f-ffe5889d2945" path="/var/lib/kubelet/pods/1f3f54ef-33a6-45f7-b28f-ffe5889d2945/volumes" Nov 24 23:31:17 crc kubenswrapper[4767]: I1124 23:31:17.129918 4767 generic.go:334] "Generic (PLEG): container finished" podID="7483195a-b928-4b24-a9bc-014d5e38f355" containerID="5c02dafd4694a9b4b8044a4c6fede0cfc4fb32c28cc045581bdffac7b4c80315" exitCode=0 Nov 24 23:31:17 crc kubenswrapper[4767]: I1124 23:31:17.130017 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" event={"ID":"7483195a-b928-4b24-a9bc-014d5e38f355","Type":"ContainerDied","Data":"5c02dafd4694a9b4b8044a4c6fede0cfc4fb32c28cc045581bdffac7b4c80315"} Nov 24 23:31:18 crc kubenswrapper[4767]: I1124 23:31:18.247097 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" Nov 24 23:31:18 crc kubenswrapper[4767]: I1124 23:31:18.444817 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7483195a-b928-4b24-a9bc-014d5e38f355-host\") pod \"7483195a-b928-4b24-a9bc-014d5e38f355\" (UID: \"7483195a-b928-4b24-a9bc-014d5e38f355\") " Nov 24 23:31:18 crc kubenswrapper[4767]: I1124 23:31:18.444894 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttnb6\" (UniqueName: \"kubernetes.io/projected/7483195a-b928-4b24-a9bc-014d5e38f355-kube-api-access-ttnb6\") pod \"7483195a-b928-4b24-a9bc-014d5e38f355\" (UID: \"7483195a-b928-4b24-a9bc-014d5e38f355\") " Nov 24 23:31:18 crc kubenswrapper[4767]: I1124 23:31:18.445001 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7483195a-b928-4b24-a9bc-014d5e38f355-host" (OuterVolumeSpecName: "host") pod "7483195a-b928-4b24-a9bc-014d5e38f355" (UID: "7483195a-b928-4b24-a9bc-014d5e38f355"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 23:31:18 crc kubenswrapper[4767]: I1124 23:31:18.448066 4767 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7483195a-b928-4b24-a9bc-014d5e38f355-host\") on node \"crc\" DevicePath \"\"" Nov 24 23:31:18 crc kubenswrapper[4767]: I1124 23:31:18.458536 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7483195a-b928-4b24-a9bc-014d5e38f355-kube-api-access-ttnb6" (OuterVolumeSpecName: "kube-api-access-ttnb6") pod "7483195a-b928-4b24-a9bc-014d5e38f355" (UID: "7483195a-b928-4b24-a9bc-014d5e38f355"). InnerVolumeSpecName "kube-api-access-ttnb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:31:18 crc kubenswrapper[4767]: I1124 23:31:18.549445 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttnb6\" (UniqueName: \"kubernetes.io/projected/7483195a-b928-4b24-a9bc-014d5e38f355-kube-api-access-ttnb6\") on node \"crc\" DevicePath \"\"" Nov 24 23:31:19 crc kubenswrapper[4767]: I1124 23:31:19.151565 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" event={"ID":"7483195a-b928-4b24-a9bc-014d5e38f355","Type":"ContainerDied","Data":"6a2668e2a8a538218262a51e3c9d6512b01ed258c722eb622bc4186537cd80ae"} Nov 24 23:31:19 crc kubenswrapper[4767]: I1124 23:31:19.151884 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2668e2a8a538218262a51e3c9d6512b01ed258c722eb622bc4186537cd80ae" Nov 24 23:31:19 crc kubenswrapper[4767]: I1124 23:31:19.151597 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-rkf8f" Nov 24 23:31:19 crc kubenswrapper[4767]: I1124 23:31:19.777755 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xf7h4/crc-debug-rkf8f"] Nov 24 23:31:19 crc kubenswrapper[4767]: I1124 23:31:19.785005 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xf7h4/crc-debug-rkf8f"] Nov 24 23:31:20 crc kubenswrapper[4767]: I1124 23:31:20.324989 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7483195a-b928-4b24-a9bc-014d5e38f355" path="/var/lib/kubelet/pods/7483195a-b928-4b24-a9bc-014d5e38f355/volumes" Nov 24 23:31:20 crc kubenswrapper[4767]: I1124 23:31:20.994716 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xf7h4/crc-debug-4mvbt"] Nov 24 23:31:20 crc kubenswrapper[4767]: E1124 23:31:20.995456 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7483195a-b928-4b24-a9bc-014d5e38f355" containerName="container-00" Nov 24 23:31:20 crc kubenswrapper[4767]: I1124 23:31:20.995472 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="7483195a-b928-4b24-a9bc-014d5e38f355" containerName="container-00" Nov 24 23:31:20 crc kubenswrapper[4767]: I1124 23:31:20.995718 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="7483195a-b928-4b24-a9bc-014d5e38f355" containerName="container-00" Nov 24 23:31:20 crc kubenswrapper[4767]: I1124 23:31:20.996546 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" Nov 24 23:31:21 crc kubenswrapper[4767]: I1124 23:31:21.100923 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgfts\" (UniqueName: \"kubernetes.io/projected/cb96beef-20d2-4e48-8fb2-70775efe97d5-kube-api-access-vgfts\") pod \"crc-debug-4mvbt\" (UID: \"cb96beef-20d2-4e48-8fb2-70775efe97d5\") " pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" Nov 24 23:31:21 crc kubenswrapper[4767]: I1124 23:31:21.101027 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb96beef-20d2-4e48-8fb2-70775efe97d5-host\") pod \"crc-debug-4mvbt\" (UID: \"cb96beef-20d2-4e48-8fb2-70775efe97d5\") " pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" Nov 24 23:31:21 crc kubenswrapper[4767]: I1124 23:31:21.203639 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb96beef-20d2-4e48-8fb2-70775efe97d5-host\") pod \"crc-debug-4mvbt\" (UID: \"cb96beef-20d2-4e48-8fb2-70775efe97d5\") " pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" Nov 24 23:31:21 crc kubenswrapper[4767]: I1124 23:31:21.203872 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb96beef-20d2-4e48-8fb2-70775efe97d5-host\") pod \"crc-debug-4mvbt\" (UID: \"cb96beef-20d2-4e48-8fb2-70775efe97d5\") " pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" Nov 24 23:31:21 crc kubenswrapper[4767]: I1124 23:31:21.203926 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgfts\" (UniqueName: \"kubernetes.io/projected/cb96beef-20d2-4e48-8fb2-70775efe97d5-kube-api-access-vgfts\") pod \"crc-debug-4mvbt\" (UID: \"cb96beef-20d2-4e48-8fb2-70775efe97d5\") " pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" Nov 24 23:31:21 crc kubenswrapper[4767]: I1124 23:31:21.241357 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgfts\" (UniqueName: \"kubernetes.io/projected/cb96beef-20d2-4e48-8fb2-70775efe97d5-kube-api-access-vgfts\") pod \"crc-debug-4mvbt\" (UID: \"cb96beef-20d2-4e48-8fb2-70775efe97d5\") " pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" Nov 24 23:31:21 crc kubenswrapper[4767]: I1124 23:31:21.323824 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" Nov 24 23:31:22 crc kubenswrapper[4767]: I1124 23:31:22.192363 4767 generic.go:334] "Generic (PLEG): container finished" podID="cb96beef-20d2-4e48-8fb2-70775efe97d5" containerID="3128cffc95a417452f736ecd2587b43f8ffcb779cde6d0b14decae5fb501d269" exitCode=0 Nov 24 23:31:22 crc kubenswrapper[4767]: I1124 23:31:22.192501 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" event={"ID":"cb96beef-20d2-4e48-8fb2-70775efe97d5","Type":"ContainerDied","Data":"3128cffc95a417452f736ecd2587b43f8ffcb779cde6d0b14decae5fb501d269"} Nov 24 23:31:22 crc kubenswrapper[4767]: I1124 23:31:22.192790 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" event={"ID":"cb96beef-20d2-4e48-8fb2-70775efe97d5","Type":"ContainerStarted","Data":"999d33736ac39d4663fa4f0a4221100eb6c1490c0a461838b73699862681f2d8"} Nov 24 23:31:22 crc kubenswrapper[4767]: I1124 23:31:22.269490 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xf7h4/crc-debug-4mvbt"] Nov 24 23:31:22 crc kubenswrapper[4767]: I1124 23:31:22.280954 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xf7h4/crc-debug-4mvbt"] Nov 24 23:31:23 crc kubenswrapper[4767]: I1124 23:31:23.338690 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" Nov 24 23:31:23 crc kubenswrapper[4767]: I1124 23:31:23.486021 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgfts\" (UniqueName: \"kubernetes.io/projected/cb96beef-20d2-4e48-8fb2-70775efe97d5-kube-api-access-vgfts\") pod \"cb96beef-20d2-4e48-8fb2-70775efe97d5\" (UID: \"cb96beef-20d2-4e48-8fb2-70775efe97d5\") " Nov 24 23:31:23 crc kubenswrapper[4767]: I1124 23:31:23.486341 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb96beef-20d2-4e48-8fb2-70775efe97d5-host\") pod \"cb96beef-20d2-4e48-8fb2-70775efe97d5\" (UID: \"cb96beef-20d2-4e48-8fb2-70775efe97d5\") " Nov 24 23:31:23 crc kubenswrapper[4767]: I1124 23:31:23.488996 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb96beef-20d2-4e48-8fb2-70775efe97d5-host" (OuterVolumeSpecName: "host") pod "cb96beef-20d2-4e48-8fb2-70775efe97d5" (UID: "cb96beef-20d2-4e48-8fb2-70775efe97d5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 23:31:23 crc kubenswrapper[4767]: I1124 23:31:23.492810 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb96beef-20d2-4e48-8fb2-70775efe97d5-kube-api-access-vgfts" (OuterVolumeSpecName: "kube-api-access-vgfts") pod "cb96beef-20d2-4e48-8fb2-70775efe97d5" (UID: "cb96beef-20d2-4e48-8fb2-70775efe97d5"). InnerVolumeSpecName "kube-api-access-vgfts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:31:23 crc kubenswrapper[4767]: I1124 23:31:23.589096 4767 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb96beef-20d2-4e48-8fb2-70775efe97d5-host\") on node \"crc\" DevicePath \"\"" Nov 24 23:31:23 crc kubenswrapper[4767]: I1124 23:31:23.589152 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgfts\" (UniqueName: \"kubernetes.io/projected/cb96beef-20d2-4e48-8fb2-70775efe97d5-kube-api-access-vgfts\") on node \"crc\" DevicePath \"\"" Nov 24 23:31:24 crc kubenswrapper[4767]: I1124 23:31:24.217384 4767 scope.go:117] "RemoveContainer" containerID="3128cffc95a417452f736ecd2587b43f8ffcb779cde6d0b14decae5fb501d269" Nov 24 23:31:24 crc kubenswrapper[4767]: I1124 23:31:24.217476 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xf7h4/crc-debug-4mvbt" Nov 24 23:31:24 crc kubenswrapper[4767]: I1124 23:31:24.326071 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb96beef-20d2-4e48-8fb2-70775efe97d5" path="/var/lib/kubelet/pods/cb96beef-20d2-4e48-8fb2-70775efe97d5/volumes" Nov 24 23:31:46 crc kubenswrapper[4767]: I1124 23:31:46.876436 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-9d666dcfd-kpjw6_521b6c97-0928-488c-a85c-0b2e777cae87/barbican-api/0.log" Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.093399 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-9d666dcfd-kpjw6_521b6c97-0928-488c-a85c-0b2e777cae87/barbican-api-log/0.log" Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.096122 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c6fc47588-98bn5_13440493-b7a7-40a6-9de1-e375ae1c8404/barbican-keystone-listener/0.log" Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.188335 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c6fc47588-98bn5_13440493-b7a7-40a6-9de1-e375ae1c8404/barbican-keystone-listener-log/0.log" Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.294258 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-dbc6679f5-nfj96_bc658137-f491-4e87-bdaa-cdc34f59a3a9/barbican-worker/0.log" Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.317919 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-dbc6679f5-nfj96_bc658137-f491-4e87-bdaa-cdc34f59a3a9/barbican-worker-log/0.log" Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.522462 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-g4grb_4f4e8bd7-4b90-4d32-b3f3-36011d7820bc/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.630519 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0be11d4b-9b77-43f3-9085-9b8ec61f3018/ceilometer-central-agent/0.log" Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.650285 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0be11d4b-9b77-43f3-9085-9b8ec61f3018/ceilometer-notification-agent/0.log" Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.732500 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0be11d4b-9b77-43f3-9085-9b8ec61f3018/sg-core/0.log" 
Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.760232 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0be11d4b-9b77-43f3-9085-9b8ec61f3018/proxy-httpd/0.log"
Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.953571 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11/cinder-api/0.log"
Nov 24 23:31:47 crc kubenswrapper[4767]: I1124 23:31:47.999095 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a667ac0d-ac24-4ba9-ac1c-e35e32fa1c11/cinder-api-log/0.log"
Nov 24 23:31:48 crc kubenswrapper[4767]: I1124 23:31:48.128625 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c71dd846-b62a-4f53-aa40-7c55462b2a15/cinder-scheduler/0.log"
Nov 24 23:31:48 crc kubenswrapper[4767]: I1124 23:31:48.242002 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c71dd846-b62a-4f53-aa40-7c55462b2a15/probe/0.log"
Nov 24 23:31:48 crc kubenswrapper[4767]: I1124 23:31:48.264832 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-cjz45_b8b98bcc-b8b9-4846-9881-398282f309f1/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 23:31:48 crc kubenswrapper[4767]: I1124 23:31:48.468999 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-8l87d_5add018b-72c6-4331-84df-96eac612f7fe/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 23:31:48 crc kubenswrapper[4767]: I1124 23:31:48.491961 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d68fbfdc-ssw6j_7c337669-c5dd-4162-a7cb-a38a0cd86dbe/init/0.log"
Nov 24 23:31:48 crc kubenswrapper[4767]: I1124 23:31:48.654907 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d68fbfdc-ssw6j_7c337669-c5dd-4162-a7cb-a38a0cd86dbe/init/0.log"
Nov 24 23:31:48 crc kubenswrapper[4767]: I1124 23:31:48.702576 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-67wdz_b9c77001-7f38-42a1-9515-7fbe495d2577/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 23:31:48 crc kubenswrapper[4767]: I1124 23:31:48.826942 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d68fbfdc-ssw6j_7c337669-c5dd-4162-a7cb-a38a0cd86dbe/dnsmasq-dns/0.log"
Nov 24 23:31:48 crc kubenswrapper[4767]: I1124 23:31:48.940887 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_839aba43-26fd-43cc-a67d-c7069f0a3f30/glance-httpd/0.log"
Nov 24 23:31:48 crc kubenswrapper[4767]: I1124 23:31:48.970309 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_839aba43-26fd-43cc-a67d-c7069f0a3f30/glance-log/0.log"
Nov 24 23:31:49 crc kubenswrapper[4767]: I1124 23:31:49.182712 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_51f52fc1-4ddc-46a5-81a4-f1a6330b86e2/glance-log/0.log"
Nov 24 23:31:49 crc kubenswrapper[4767]: I1124 23:31:49.240362 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_51f52fc1-4ddc-46a5-81a4-f1a6330b86e2/glance-httpd/0.log"
Nov 24 23:31:49 crc kubenswrapper[4767]: I1124 23:31:49.503813 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-567c96d68-4rmbm_f3a751ba-fb23-4cd3-a1f7-2c843e04ab47/horizon/0.log"
Nov 24 23:31:49 crc kubenswrapper[4767]: I1124 23:31:49.608115 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-tfwtq_dbe64ab9-7ff6-4ce8-8d48-cfb16f848a5f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 23:31:49 crc kubenswrapper[4767]: I1124 23:31:49.752367 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lk86q_ee9d91d5-b6b0-4376-b65e-b211504121e8/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 23:31:50 crc kubenswrapper[4767]: I1124 23:31:50.125884 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29400361-ffqrr_408f9b2f-5719-4224-859e-d583726e92aa/keystone-cron/0.log"
Nov 24 23:31:50 crc kubenswrapper[4767]: I1124 23:31:50.351801 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29400421-q74h2_98fc4d42-16a8-4051-afa0-e47332ee72bf/keystone-cron/0.log"
Nov 24 23:31:50 crc kubenswrapper[4767]: I1124 23:31:50.517775 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_23380850-3126-4e93-b869-0da00c51d57c/kube-state-metrics/0.log"
Nov 24 23:31:50 crc kubenswrapper[4767]: I1124 23:31:50.594929 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7574cdc49f-grwcx_962cbac3-dc40-4b91-a5ca-69c6fb9ad020/keystone-api/0.log"
Nov 24 23:31:50 crc kubenswrapper[4767]: I1124 23:31:50.595475 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-567c96d68-4rmbm_f3a751ba-fb23-4cd3-a1f7-2c843e04ab47/horizon-log/0.log"
Nov 24 23:31:50 crc kubenswrapper[4767]: I1124 23:31:50.722146 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-kl6w2_12cea285-00cd-40e4-b751-75563f414f33/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 23:31:51 crc kubenswrapper[4767]: I1124 23:31:51.110933 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ckrz7_73ed4c4b-18b6-4d28-b0b2-f1a480963c46/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 23:31:51 crc kubenswrapper[4767]: I1124 23:31:51.143715 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78c4646f4f-mnjlq_13d6c00a-8e06-47a6-b1c7-f32681fd7ddd/neutron-httpd/0.log"
Nov 24 23:31:51 crc kubenswrapper[4767]: I1124 23:31:51.196386 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78c4646f4f-mnjlq_13d6c00a-8e06-47a6-b1c7-f32681fd7ddd/neutron-api/0.log"
Nov 24 23:31:51 crc kubenswrapper[4767]: I1124 23:31:51.792967 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_04008f61-32ce-4326-b12d-056878a5479f/nova-cell0-conductor-conductor/0.log"
Nov 24 23:31:52 crc kubenswrapper[4767]: I1124 23:31:52.067388 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_41bdf82d-f1b9-4575-a36b-32d5617b9562/nova-cell1-conductor-conductor/0.log"
Nov 24 23:31:52 crc kubenswrapper[4767]: I1124 23:31:52.383922 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_1f689eaf-9606-42fc-98cf-d69f82676ecf/nova-cell1-novncproxy-novncproxy/0.log"
Nov 24 23:31:52 crc kubenswrapper[4767]: I1124 23:31:52.675423 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-hw2k2_4939e57b-c314-4065-a96f-e111bd32f3e2/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 23:31:52 crc kubenswrapper[4767]: I1124 23:31:52.962373 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0af582ee-37f6-41fa-882e-a11eab5c4f29/nova-metadata-log/0.log"
Nov 24 23:31:53 crc kubenswrapper[4767]: I1124 23:31:53.067485 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_102497e1-cf13-4ed2-8976-ac528dbc6c82/nova-api-log/0.log"
Nov 24 23:31:53 crc kubenswrapper[4767]: I1124 23:31:53.558985 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1eb63f2d-c413-4bb0-9c31-3c7871a80319/nova-scheduler-scheduler/0.log"
Nov 24 23:31:53 crc kubenswrapper[4767]: I1124 23:31:53.597044 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_102497e1-cf13-4ed2-8976-ac528dbc6c82/nova-api-api/0.log"
Nov 24 23:31:53 crc kubenswrapper[4767]: I1124 23:31:53.631388 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b5a55be5-98af-48c4-800f-1595cb7e1959/mysql-bootstrap/0.log"
Nov 24 23:31:53 crc kubenswrapper[4767]: I1124 23:31:53.813501 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b5a55be5-98af-48c4-800f-1595cb7e1959/mysql-bootstrap/0.log"
Nov 24 23:31:53 crc kubenswrapper[4767]: I1124 23:31:53.860570 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b5a55be5-98af-48c4-800f-1595cb7e1959/galera/0.log"
Nov 24 23:31:54 crc kubenswrapper[4767]: I1124 23:31:54.011716 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3e2dc17c-c088-4182-8695-1c09ee22aa06/mysql-bootstrap/0.log"
Nov 24 23:31:54 crc kubenswrapper[4767]: I1124 23:31:54.223832 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3e2dc17c-c088-4182-8695-1c09ee22aa06/mysql-bootstrap/0.log"
Nov 24 23:31:54 crc kubenswrapper[4767]: I1124 23:31:54.231944 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3e2dc17c-c088-4182-8695-1c09ee22aa06/galera/0.log"
Nov 24 23:31:54 crc kubenswrapper[4767]: I1124 23:31:54.418495 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9d2b6ae4-687d-4fa8-b641-0ddbbf3df57c/openstackclient/0.log"
Nov 24 23:31:54 crc kubenswrapper[4767]: I1124 23:31:54.482618 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ntjb8_b359e7d5-b708-4bf2-9017-48099ff8e287/openstack-network-exporter/0.log"
Nov 24 23:31:54 crc kubenswrapper[4767]: I1124 23:31:54.670119 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ngft4_6e7e218a-3550-499e-8337-5940f98af41c/ovn-controller/0.log"
Nov 24 23:31:54 crc kubenswrapper[4767]: I1124 23:31:54.942112 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6bq9m_336d57cd-046c-436a-a596-69890001522f/ovsdb-server-init/0.log"
Nov 24 23:31:55 crc kubenswrapper[4767]: I1124 23:31:55.069290 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6bq9m_336d57cd-046c-436a-a596-69890001522f/ovs-vswitchd/0.log"
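Repeated paths in this run (for example, the two mysql-bootstrap entries per galera pod above) are genuine re-parses by the kubelet, not duplicated lines. One quick way to summarize a stretch like this is to count entries per pod directory; a throwaway sketch that reads the journal text from stdin:

    import re, sys
    from collections import Counter

    pat = re.compile(r'"Finished parsing log file" path="/var/log/pods/([^/]+)/([^/"]+)/')

    counts = Counter()
    for line in sys.stdin:
        for pod_dir, container in pat.findall(line):
            counts[pod_dir] += 1

    for pod_dir, n in counts.most_common(5):
        print(f"{n:3d}  {pod_dir}")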
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6bq9m_336d57cd-046c-436a-a596-69890001522f/ovsdb-server-init/0.log" Nov 24 23:31:55 crc kubenswrapper[4767]: I1124 23:31:55.142709 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6bq9m_336d57cd-046c-436a-a596-69890001522f/ovsdb-server/0.log" Nov 24 23:31:55 crc kubenswrapper[4767]: I1124 23:31:55.340323 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-6qclq_1de492fb-e45f-40d5-8115-0c5a9ae9e49a/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:31:55 crc kubenswrapper[4767]: I1124 23:31:55.501502 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_00633903-4662-43b6-a25f-0b18b9cdf455/openstack-network-exporter/0.log" Nov 24 23:31:55 crc kubenswrapper[4767]: I1124 23:31:55.561327 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_00633903-4662-43b6-a25f-0b18b9cdf455/ovn-northd/0.log" Nov 24 23:31:55 crc kubenswrapper[4767]: I1124 23:31:55.653062 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0af582ee-37f6-41fa-882e-a11eab5c4f29/nova-metadata-metadata/0.log" Nov 24 23:31:55 crc kubenswrapper[4767]: I1124 23:31:55.718159 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4814045f-5f97-427e-a1bb-3aa438fc2e5d/openstack-network-exporter/0.log" Nov 24 23:31:55 crc kubenswrapper[4767]: I1124 23:31:55.774357 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4814045f-5f97-427e-a1bb-3aa438fc2e5d/ovsdbserver-nb/0.log" Nov 24 23:31:55 crc kubenswrapper[4767]: I1124 23:31:55.872246 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7a77426c-8a5f-427c-accc-fa0de1270f9c/openstack-network-exporter/0.log" Nov 24 23:31:56 crc kubenswrapper[4767]: I1124 23:31:55.999533 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7a77426c-8a5f-427c-accc-fa0de1270f9c/ovsdbserver-sb/0.log" Nov 24 23:31:56 crc kubenswrapper[4767]: I1124 23:31:56.303971 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_76349a53-1d05-411f-9af2-0833bc0667b1/init-config-reloader/0.log" Nov 24 23:31:56 crc kubenswrapper[4767]: I1124 23:31:56.337773 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7d6f9dff64-d2zkv_4e3198f8-260a-4ccd-a470-100aa54835c0/placement-api/0.log" Nov 24 23:31:56 crc kubenswrapper[4767]: I1124 23:31:56.463797 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7d6f9dff64-d2zkv_4e3198f8-260a-4ccd-a470-100aa54835c0/placement-log/0.log" Nov 24 23:31:56 crc kubenswrapper[4767]: I1124 23:31:56.486374 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_76349a53-1d05-411f-9af2-0833bc0667b1/init-config-reloader/0.log" Nov 24 23:31:56 crc kubenswrapper[4767]: I1124 23:31:56.532936 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_76349a53-1d05-411f-9af2-0833bc0667b1/config-reloader/0.log" Nov 24 23:31:56 crc kubenswrapper[4767]: I1124 23:31:56.555917 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_76349a53-1d05-411f-9af2-0833bc0667b1/prometheus/0.log" Nov 24 23:31:56 crc kubenswrapper[4767]: I1124 
23:31:56.699911 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_54f86c38-24f7-427b-9b8c-4f4505f7fa1d/setup-container/0.log" Nov 24 23:31:56 crc kubenswrapper[4767]: I1124 23:31:56.707298 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_76349a53-1d05-411f-9af2-0833bc0667b1/thanos-sidecar/0.log" Nov 24 23:31:56 crc kubenswrapper[4767]: I1124 23:31:56.895398 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_54f86c38-24f7-427b-9b8c-4f4505f7fa1d/setup-container/0.log" Nov 24 23:31:56 crc kubenswrapper[4767]: I1124 23:31:56.931741 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_54f86c38-24f7-427b-9b8c-4f4505f7fa1d/rabbitmq/0.log" Nov 24 23:31:57 crc kubenswrapper[4767]: I1124 23:31:57.063572 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6e04e8f5-1d91-474f-b67b-d8fa24e00b90/setup-container/0.log" Nov 24 23:31:57 crc kubenswrapper[4767]: I1124 23:31:57.345174 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6e04e8f5-1d91-474f-b67b-d8fa24e00b90/setup-container/0.log" Nov 24 23:31:57 crc kubenswrapper[4767]: I1124 23:31:57.366735 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-mjsrk_9ee9a8bf-0bd8-49fc-8421-1805014adfac/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:31:57 crc kubenswrapper[4767]: I1124 23:31:57.376605 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6e04e8f5-1d91-474f-b67b-d8fa24e00b90/rabbitmq/0.log" Nov 24 23:31:57 crc kubenswrapper[4767]: I1124 23:31:57.577788 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-6jl5j_6a70e2b9-04fb-4374-aacd-bdfb2cd8fd11/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:31:57 crc kubenswrapper[4767]: I1124 23:31:57.734819 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-vdqqd_256a937c-fb13-42bf-b69f-140b9d8bad1d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:31:57 crc kubenswrapper[4767]: I1124 23:31:57.809963 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-44g44_0757ad1e-fda9-4955-8b22-4de26be15b37/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:31:57 crc kubenswrapper[4767]: I1124 23:31:57.970873 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jf5rl_70cec17a-2bbb-4bf2-9236-5848efc6689c/ssh-known-hosts-edpm-deployment/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.190952 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-64b748f489-f8d4f_92516271-3ccd-4f57-866d-7242ab4b50c6/proxy-server/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.304696 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-pfdzc_084fdc28-199d-44c7-93c8-67792c6f4829/swift-ring-rebalance/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.417912 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-64b748f489-f8d4f_92516271-3ccd-4f57-866d-7242ab4b50c6/proxy-httpd/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 
23:31:58.481886 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/account-reaper/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.493723 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/account-auditor/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.653250 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/account-server/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.663531 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/container-auditor/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.676317 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/account-replicator/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.714029 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/container-replicator/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.861041 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/container-server/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.882139 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/container-updater/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.885023 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/object-auditor/0.log" Nov 24 23:31:58 crc kubenswrapper[4767]: I1124 23:31:58.921912 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/object-expirer/0.log" Nov 24 23:31:59 crc kubenswrapper[4767]: I1124 23:31:59.052138 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/object-server/0.log" Nov 24 23:31:59 crc kubenswrapper[4767]: I1124 23:31:59.128165 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/object-replicator/0.log" Nov 24 23:31:59 crc kubenswrapper[4767]: I1124 23:31:59.149163 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/object-updater/0.log" Nov 24 23:31:59 crc kubenswrapper[4767]: I1124 23:31:59.153423 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/rsync/0.log" Nov 24 23:31:59 crc kubenswrapper[4767]: I1124 23:31:59.247625 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db319bac-943e-4baa-afb0-2089513c8935/swift-recon-cron/0.log" Nov 24 23:31:59 crc kubenswrapper[4767]: I1124 23:31:59.481842 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-vvdkt_4712a89f-30ee-4a70-99f4-8765c454f318/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:31:59 crc kubenswrapper[4767]: I1124 23:31:59.685487 4767 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-qfb6s_ce46524c-1a5f-4fb5-afc9-f3c46fa33135/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:31:59 crc kubenswrapper[4767]: I1124 23:31:59.912895 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_91b558f1-e51a-4b3d-b96f-bbe1cc5e6ab3/tempest-tests-tempest-tests-runner/0.log" Nov 24 23:32:00 crc kubenswrapper[4767]: I1124 23:32:00.552353 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_63be4b34-e65f-4045-8223-6f19324c761b/watcher-applier/0.log" Nov 24 23:32:00 crc kubenswrapper[4767]: I1124 23:32:00.702955 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_19d92504-eb02-4711-a860-bed97da288e0/watcher-api-log/0.log" Nov 24 23:32:01 crc kubenswrapper[4767]: I1124 23:32:01.702952 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_96538813-044f-45a6-b596-07f9dec093c6/watcher-decision-engine/0.log" Nov 24 23:32:04 crc kubenswrapper[4767]: I1124 23:32:04.001561 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_c94f1692-e48b-43d8-9694-1d54ba3e8f41/memcached/0.log" Nov 24 23:32:04 crc kubenswrapper[4767]: I1124 23:32:04.617711 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_19d92504-eb02-4711-a860-bed97da288e0/watcher-api/0.log" Nov 24 23:32:05 crc kubenswrapper[4767]: I1124 23:32:05.482201 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:32:05 crc kubenswrapper[4767]: I1124 23:32:05.482327 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:32:25 crc kubenswrapper[4767]: I1124 23:32:25.335176 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn_244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb/util/0.log" Nov 24 23:32:25 crc kubenswrapper[4767]: I1124 23:32:25.503545 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn_244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb/util/0.log" Nov 24 23:32:25 crc kubenswrapper[4767]: I1124 23:32:25.504907 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn_244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb/pull/0.log" Nov 24 23:32:25 crc kubenswrapper[4767]: I1124 23:32:25.528203 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn_244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb/pull/0.log" Nov 24 23:32:25 crc kubenswrapper[4767]: I1124 23:32:25.737930 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn_244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb/util/0.log" Nov 24 23:32:25 
Nov 24 23:32:25 crc kubenswrapper[4767]: I1124 23:32:25.751821 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn_244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb/extract/0.log"
Nov 24 23:32:25 crc kubenswrapper[4767]: I1124 23:32:25.765129 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73899cb78c31d26b1d5103b74dd1ea4a1da4780a9bc6d6028c6cf54295f42jn_244c3f1e-2d0b-41fb-b16a-3bb28cb9b4bb/pull/0.log"
Nov 24 23:32:25 crc kubenswrapper[4767]: I1124 23:32:25.916722 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-z6h87_44564b48-f353-4b3f-a0b7-b42ecd1bf838/kube-rbac-proxy/0.log"
Nov 24 23:32:25 crc kubenswrapper[4767]: I1124 23:32:25.978371 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-hxzx7_1cb193ac-a6d0-4981-91b8-234d77ab2cd7/kube-rbac-proxy/0.log"
Nov 24 23:32:25 crc kubenswrapper[4767]: I1124 23:32:25.997785 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-z6h87_44564b48-f353-4b3f-a0b7-b42ecd1bf838/manager/0.log"
Nov 24 23:32:26 crc kubenswrapper[4767]: I1124 23:32:26.132384 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-hxzx7_1cb193ac-a6d0-4981-91b8-234d77ab2cd7/manager/0.log"
Nov 24 23:32:26 crc kubenswrapper[4767]: I1124 23:32:26.209245 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-c42hw_5abc7b42-2e06-4722-b3e4-aab9de868251/kube-rbac-proxy/0.log"
Nov 24 23:32:26 crc kubenswrapper[4767]: I1124 23:32:26.212042 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-c42hw_5abc7b42-2e06-4722-b3e4-aab9de868251/manager/0.log"
Nov 24 23:32:26 crc kubenswrapper[4767]: I1124 23:32:26.373554 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-ns9km_e8cfe9d6-3aba-44af-9dbc-679d34dc98d0/kube-rbac-proxy/0.log"
Nov 24 23:32:26 crc kubenswrapper[4767]: I1124 23:32:26.545915 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-ns9km_e8cfe9d6-3aba-44af-9dbc-679d34dc98d0/manager/0.log"
Nov 24 23:32:26 crc kubenswrapper[4767]: I1124 23:32:26.583367 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-wtwkg_945744e6-8179-45cb-a020-de9b73fa89a1/kube-rbac-proxy/0.log"
Nov 24 23:32:26 crc kubenswrapper[4767]: I1124 23:32:26.600017 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-wtwkg_945744e6-8179-45cb-a020-de9b73fa89a1/manager/0.log"
Nov 24 23:32:26 crc kubenswrapper[4767]: I1124 23:32:26.706127 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-7pth6_78ad5af3-1937-484b-bd41-9a7ac9d09db3/kube-rbac-proxy/0.log"
Nov 24 23:32:26 crc kubenswrapper[4767]: I1124 23:32:26.806005 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-7pth6_78ad5af3-1937-484b-bd41-9a7ac9d09db3/manager/0.log"
Nov 24 23:32:26 crc kubenswrapper[4767]: I1124 23:32:26.870417 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-wln68_7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8/kube-rbac-proxy/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.042616 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-rmwtn_a5a1f537-9c37-40a5-9f2f-a9ec762ca458/kube-rbac-proxy/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.045153 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-wln68_7f35807c-54db-4e6e-aeb1-8f8b15b6cbb8/manager/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.102011 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-rmwtn_a5a1f537-9c37-40a5-9f2f-a9ec762ca458/manager/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.213044 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-zdlcp_45530d57-164d-48f7-89e1-0a0f85ccb029/kube-rbac-proxy/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.292484 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-zdlcp_45530d57-164d-48f7-89e1-0a0f85ccb029/manager/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.408193 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-9c4l8_c4266ab7-4886-4015-9a87-6454fc59e9c5/manager/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.417511 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-9c4l8_c4266ab7-4886-4015-9a87-6454fc59e9c5/kube-rbac-proxy/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.506194 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-ghpkb_acb0f017-b32b-4d0a-98b5-bd8d4db084ea/kube-rbac-proxy/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.610493 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-ghpkb_acb0f017-b32b-4d0a-98b5-bd8d4db084ea/manager/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.688331 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-wmpbx_97bfe853-33bd-4dfc-b7bf-f9c82d9d0ba8/kube-rbac-proxy/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.752085 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-wmpbx_97bfe853-33bd-4dfc-b7bf-f9c82d9d0ba8/manager/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.832854 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-jjk4x_b7220fb1-add2-490e-9a22-09ca48f0de97/kube-rbac-proxy/0.log"
Nov 24 23:32:27 crc kubenswrapper[4767]: I1124 23:32:27.992065 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-jjk4x_b7220fb1-add2-490e-9a22-09ca48f0de97/manager/0.log"
Nov 24 23:32:28 crc kubenswrapper[4767]: I1124 23:32:28.006730 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-vdr7z_bde0dfef-808a-4851-81a8-968847586652/kube-rbac-proxy/0.log"
Nov 24 23:32:28 crc kubenswrapper[4767]: I1124 23:32:28.037569 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-vdr7z_bde0dfef-808a-4851-81a8-968847586652/manager/0.log"
Nov 24 23:32:28 crc kubenswrapper[4767]: I1124 23:32:28.164631 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc_1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d/kube-rbac-proxy/0.log"
Nov 24 23:32:28 crc kubenswrapper[4767]: I1124 23:32:28.181018 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-7d7sc_1d74c7aa-02a6-4c57-95d5-7d2b62d7dc9d/manager/0.log"
Nov 24 23:32:28 crc kubenswrapper[4767]: I1124 23:32:28.496289 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-55d996bbb7-zpgcg_a5bf8969-1c9c-4141-bcc7-fcdb88508516/operator/0.log"
Nov 24 23:32:28 crc kubenswrapper[4767]: I1124 23:32:28.627370 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sfphm_069a00af-68eb-41b7-9bcf-5209562d25d8/registry-server/0.log"
Nov 24 23:32:28 crc kubenswrapper[4767]: I1124 23:32:28.785431 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-g7fnm_ea0b61d0-e20f-40eb-a3a8-329ff271f057/kube-rbac-proxy/0.log"
Nov 24 23:32:28 crc kubenswrapper[4767]: I1124 23:32:28.872035 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-g7fnm_ea0b61d0-e20f-40eb-a3a8-329ff271f057/manager/0.log"
Nov 24 23:32:28 crc kubenswrapper[4767]: I1124 23:32:28.957541 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-7v5f9_fb5e8630-50f8-4d2c-a77a-d23b6441386a/kube-rbac-proxy/0.log"
Nov 24 23:32:29 crc kubenswrapper[4767]: I1124 23:32:29.049000 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-7v5f9_fb5e8630-50f8-4d2c-a77a-d23b6441386a/manager/0.log"
Nov 24 23:32:29 crc kubenswrapper[4767]: I1124 23:32:29.180514 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4sfq7_52982ab5-3f6d-47fa-baf9-c6957e170ffe/operator/0.log"
Nov 24 23:32:29 crc kubenswrapper[4767]: I1124 23:32:29.292140 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-7bvtn_0266992d-7010-4fa3-9a94-2a7ab457f4ca/kube-rbac-proxy/0.log"
Nov 24 23:32:29 crc kubenswrapper[4767]: I1124 23:32:29.334201 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-7bvtn_0266992d-7010-4fa3-9a94-2a7ab457f4ca/manager/0.log"
"Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-hpv5b_aa98c97b-2d21-481f-9ddf-3e5adce9f626/kube-rbac-proxy/0.log" Nov 24 23:32:29 crc kubenswrapper[4767]: I1124 23:32:29.592760 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-2nr8k_ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b/kube-rbac-proxy/0.log" Nov 24 23:32:29 crc kubenswrapper[4767]: I1124 23:32:29.597868 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-2nr8k_ce10027a-4cbe-4cb3-bde7-a3efa2ec4c6b/manager/0.log" Nov 24 23:32:29 crc kubenswrapper[4767]: I1124 23:32:29.625984 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5d749b69b6-ns4rd_0ac691b7-c7ad-467b-b4f2-46e9d52c450f/manager/0.log" Nov 24 23:32:29 crc kubenswrapper[4767]: I1124 23:32:29.733195 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-hpv5b_aa98c97b-2d21-481f-9ddf-3e5adce9f626/manager/0.log" Nov 24 23:32:29 crc kubenswrapper[4767]: I1124 23:32:29.773970 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5c96f79b7c-4msp7_0265238d-c56a-428f-a359-a2e9cff33593/kube-rbac-proxy/0.log" Nov 24 23:32:29 crc kubenswrapper[4767]: I1124 23:32:29.861463 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5c96f79b7c-4msp7_0265238d-c56a-428f-a359-a2e9cff33593/manager/0.log" Nov 24 23:32:35 crc kubenswrapper[4767]: I1124 23:32:35.481644 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:32:35 crc kubenswrapper[4767]: I1124 23:32:35.482199 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:32:46 crc kubenswrapper[4767]: I1124 23:32:46.846604 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mssg2_e8d6ce66-68d1-45fd-9e54-6baedf990e1d/control-plane-machine-set-operator/0.log" Nov 24 23:32:47 crc kubenswrapper[4767]: I1124 23:32:47.022285 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mx2n5_45b0fcf9-821d-4504-acf3-2d1cfb83d093/machine-api-operator/0.log" Nov 24 23:32:47 crc kubenswrapper[4767]: I1124 23:32:47.022577 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mx2n5_45b0fcf9-821d-4504-acf3-2d1cfb83d093/kube-rbac-proxy/0.log" Nov 24 23:32:48 crc kubenswrapper[4767]: I1124 23:32:48.851761 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hh4df"] Nov 24 23:32:48 crc kubenswrapper[4767]: E1124 23:32:48.852789 4767 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cb96beef-20d2-4e48-8fb2-70775efe97d5" containerName="container-00" Nov 24 23:32:48 crc kubenswrapper[4767]: I1124 23:32:48.852806 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb96beef-20d2-4e48-8fb2-70775efe97d5" containerName="container-00" Nov 24 23:32:48 crc kubenswrapper[4767]: I1124 23:32:48.853120 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb96beef-20d2-4e48-8fb2-70775efe97d5" containerName="container-00" Nov 24 23:32:48 crc kubenswrapper[4767]: I1124 23:32:48.854753 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:48 crc kubenswrapper[4767]: I1124 23:32:48.869963 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hh4df"] Nov 24 23:32:48 crc kubenswrapper[4767]: I1124 23:32:48.980723 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twlvh\" (UniqueName: \"kubernetes.io/projected/75efd199-8f88-4a24-9020-465749614adb-kube-api-access-twlvh\") pod \"community-operators-hh4df\" (UID: \"75efd199-8f88-4a24-9020-465749614adb\") " pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:48 crc kubenswrapper[4767]: I1124 23:32:48.981389 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-utilities\") pod \"community-operators-hh4df\" (UID: \"75efd199-8f88-4a24-9020-465749614adb\") " pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:48 crc kubenswrapper[4767]: I1124 23:32:48.981449 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-catalog-content\") pod \"community-operators-hh4df\" (UID: \"75efd199-8f88-4a24-9020-465749614adb\") " pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:49 crc kubenswrapper[4767]: I1124 23:32:49.083194 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-utilities\") pod \"community-operators-hh4df\" (UID: \"75efd199-8f88-4a24-9020-465749614adb\") " pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:49 crc kubenswrapper[4767]: I1124 23:32:49.083570 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-catalog-content\") pod \"community-operators-hh4df\" (UID: \"75efd199-8f88-4a24-9020-465749614adb\") " pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:49 crc kubenswrapper[4767]: I1124 23:32:49.083649 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twlvh\" (UniqueName: \"kubernetes.io/projected/75efd199-8f88-4a24-9020-465749614adb-kube-api-access-twlvh\") pod \"community-operators-hh4df\" (UID: \"75efd199-8f88-4a24-9020-465749614adb\") " pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:49 crc kubenswrapper[4767]: I1124 23:32:49.083850 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-utilities\") pod \"community-operators-hh4df\" (UID: 
\"75efd199-8f88-4a24-9020-465749614adb\") " pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:49 crc kubenswrapper[4767]: I1124 23:32:49.084093 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-catalog-content\") pod \"community-operators-hh4df\" (UID: \"75efd199-8f88-4a24-9020-465749614adb\") " pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:49 crc kubenswrapper[4767]: I1124 23:32:49.101927 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twlvh\" (UniqueName: \"kubernetes.io/projected/75efd199-8f88-4a24-9020-465749614adb-kube-api-access-twlvh\") pod \"community-operators-hh4df\" (UID: \"75efd199-8f88-4a24-9020-465749614adb\") " pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:49 crc kubenswrapper[4767]: I1124 23:32:49.173358 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:49 crc kubenswrapper[4767]: I1124 23:32:49.735528 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hh4df"] Nov 24 23:32:50 crc kubenswrapper[4767]: I1124 23:32:50.016207 4767 generic.go:334] "Generic (PLEG): container finished" podID="75efd199-8f88-4a24-9020-465749614adb" containerID="312e91398fefd524feec8a258d743035a631a4b5ac17791a02702d54c89edc3e" exitCode=0 Nov 24 23:32:50 crc kubenswrapper[4767]: I1124 23:32:50.016364 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh4df" event={"ID":"75efd199-8f88-4a24-9020-465749614adb","Type":"ContainerDied","Data":"312e91398fefd524feec8a258d743035a631a4b5ac17791a02702d54c89edc3e"} Nov 24 23:32:50 crc kubenswrapper[4767]: I1124 23:32:50.016586 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh4df" event={"ID":"75efd199-8f88-4a24-9020-465749614adb","Type":"ContainerStarted","Data":"e2699594613999b66a0a7a02583fcded375f695b9a63037144a030de27e4a35c"} Nov 24 23:32:51 crc kubenswrapper[4767]: I1124 23:32:51.029774 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh4df" event={"ID":"75efd199-8f88-4a24-9020-465749614adb","Type":"ContainerStarted","Data":"4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e"} Nov 24 23:32:53 crc kubenswrapper[4767]: I1124 23:32:53.052945 4767 generic.go:334] "Generic (PLEG): container finished" podID="75efd199-8f88-4a24-9020-465749614adb" containerID="4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e" exitCode=0 Nov 24 23:32:53 crc kubenswrapper[4767]: I1124 23:32:53.053039 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh4df" event={"ID":"75efd199-8f88-4a24-9020-465749614adb","Type":"ContainerDied","Data":"4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e"} Nov 24 23:32:54 crc kubenswrapper[4767]: I1124 23:32:54.086011 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh4df" event={"ID":"75efd199-8f88-4a24-9020-465749614adb","Type":"ContainerStarted","Data":"c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579"} Nov 24 23:32:54 crc kubenswrapper[4767]: I1124 23:32:54.116299 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-hh4df" podStartSLOduration=2.697603568 podStartE2EDuration="6.1162593s" podCreationTimestamp="2025-11-24 23:32:48 +0000 UTC" firstStartedPulling="2025-11-24 23:32:50.018967272 +0000 UTC m=+6852.935950674" lastFinishedPulling="2025-11-24 23:32:53.437623004 +0000 UTC m=+6856.354606406" observedRunningTime="2025-11-24 23:32:54.112839314 +0000 UTC m=+6857.029822696" watchObservedRunningTime="2025-11-24 23:32:54.1162593 +0000 UTC m=+6857.033242682" Nov 24 23:32:59 crc kubenswrapper[4767]: I1124 23:32:59.174162 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:59 crc kubenswrapper[4767]: I1124 23:32:59.175059 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:32:59 crc kubenswrapper[4767]: I1124 23:32:59.242758 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:33:00 crc kubenswrapper[4767]: I1124 23:33:00.214070 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:33:00 crc kubenswrapper[4767]: I1124 23:33:00.266428 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hh4df"] Nov 24 23:33:00 crc kubenswrapper[4767]: I1124 23:33:00.640220 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-4snnn_8b86c481-27ba-4661-9456-6d0c2c37e707/cert-manager-controller/0.log" Nov 24 23:33:00 crc kubenswrapper[4767]: I1124 23:33:00.801873 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-pj7sv_ec144c5d-54dc-44b7-ab5a-e79db52a31d4/cert-manager-cainjector/0.log" Nov 24 23:33:00 crc kubenswrapper[4767]: I1124 23:33:00.833589 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-9rlpk_1fef5731-47c0-449f-b861-14eb7d3bbb32/cert-manager-webhook/0.log" Nov 24 23:33:02 crc kubenswrapper[4767]: I1124 23:33:02.170196 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hh4df" podUID="75efd199-8f88-4a24-9020-465749614adb" containerName="registry-server" containerID="cri-o://c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579" gracePeriod=2 Nov 24 23:33:02 crc kubenswrapper[4767]: I1124 23:33:02.667010 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:33:02 crc kubenswrapper[4767]: I1124 23:33:02.784862 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-utilities\") pod \"75efd199-8f88-4a24-9020-465749614adb\" (UID: \"75efd199-8f88-4a24-9020-465749614adb\") " Nov 24 23:33:02 crc kubenswrapper[4767]: I1124 23:33:02.784944 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-catalog-content\") pod \"75efd199-8f88-4a24-9020-465749614adb\" (UID: \"75efd199-8f88-4a24-9020-465749614adb\") " Nov 24 23:33:02 crc kubenswrapper[4767]: I1124 23:33:02.785004 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twlvh\" (UniqueName: \"kubernetes.io/projected/75efd199-8f88-4a24-9020-465749614adb-kube-api-access-twlvh\") pod \"75efd199-8f88-4a24-9020-465749614adb\" (UID: \"75efd199-8f88-4a24-9020-465749614adb\") " Nov 24 23:33:02 crc kubenswrapper[4767]: I1124 23:33:02.785798 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-utilities" (OuterVolumeSpecName: "utilities") pod "75efd199-8f88-4a24-9020-465749614adb" (UID: "75efd199-8f88-4a24-9020-465749614adb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:33:02 crc kubenswrapper[4767]: I1124 23:33:02.791560 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75efd199-8f88-4a24-9020-465749614adb-kube-api-access-twlvh" (OuterVolumeSpecName: "kube-api-access-twlvh") pod "75efd199-8f88-4a24-9020-465749614adb" (UID: "75efd199-8f88-4a24-9020-465749614adb"). InnerVolumeSpecName "kube-api-access-twlvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:33:02 crc kubenswrapper[4767]: I1124 23:33:02.855756 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75efd199-8f88-4a24-9020-465749614adb" (UID: "75efd199-8f88-4a24-9020-465749614adb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:33:02 crc kubenswrapper[4767]: I1124 23:33:02.887979 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:33:02 crc kubenswrapper[4767]: I1124 23:33:02.888022 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75efd199-8f88-4a24-9020-465749614adb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:33:02 crc kubenswrapper[4767]: I1124 23:33:02.888041 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twlvh\" (UniqueName: \"kubernetes.io/projected/75efd199-8f88-4a24-9020-465749614adb-kube-api-access-twlvh\") on node \"crc\" DevicePath \"\"" Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.182431 4767 generic.go:334] "Generic (PLEG): container finished" podID="75efd199-8f88-4a24-9020-465749614adb" containerID="c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579" exitCode=0 Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.182488 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hh4df" Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.182497 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh4df" event={"ID":"75efd199-8f88-4a24-9020-465749614adb","Type":"ContainerDied","Data":"c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579"} Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.182813 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh4df" event={"ID":"75efd199-8f88-4a24-9020-465749614adb","Type":"ContainerDied","Data":"e2699594613999b66a0a7a02583fcded375f695b9a63037144a030de27e4a35c"} Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.182837 4767 scope.go:117] "RemoveContainer" containerID="c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579" Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.208170 4767 scope.go:117] "RemoveContainer" containerID="4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e" Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.224520 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hh4df"] Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.237960 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hh4df"] Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.265504 4767 scope.go:117] "RemoveContainer" containerID="312e91398fefd524feec8a258d743035a631a4b5ac17791a02702d54c89edc3e" Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.292096 4767 scope.go:117] "RemoveContainer" containerID="c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579" Nov 24 23:33:03 crc kubenswrapper[4767]: E1124 23:33:03.292455 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579\": container with ID starting with c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579 not found: ID does not exist" containerID="c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579" Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.292494 
4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579"} err="failed to get container status \"c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579\": rpc error: code = NotFound desc = could not find container \"c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579\": container with ID starting with c90d0fe0b58c2ce3c74645a21bc8276df320411d1b38879ff51dd00ba2de3579 not found: ID does not exist" Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.292528 4767 scope.go:117] "RemoveContainer" containerID="4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e" Nov 24 23:33:03 crc kubenswrapper[4767]: E1124 23:33:03.292822 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e\": container with ID starting with 4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e not found: ID does not exist" containerID="4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e" Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.292850 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e"} err="failed to get container status \"4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e\": rpc error: code = NotFound desc = could not find container \"4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e\": container with ID starting with 4985d7edcf13d82c00d869f43213c6cf1d1787827c708c3f9bfaf71be1e1547e not found: ID does not exist" Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.292875 4767 scope.go:117] "RemoveContainer" containerID="312e91398fefd524feec8a258d743035a631a4b5ac17791a02702d54c89edc3e" Nov 24 23:33:03 crc kubenswrapper[4767]: E1124 23:33:03.293140 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"312e91398fefd524feec8a258d743035a631a4b5ac17791a02702d54c89edc3e\": container with ID starting with 312e91398fefd524feec8a258d743035a631a4b5ac17791a02702d54c89edc3e not found: ID does not exist" containerID="312e91398fefd524feec8a258d743035a631a4b5ac17791a02702d54c89edc3e" Nov 24 23:33:03 crc kubenswrapper[4767]: I1124 23:33:03.293160 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"312e91398fefd524feec8a258d743035a631a4b5ac17791a02702d54c89edc3e"} err="failed to get container status \"312e91398fefd524feec8a258d743035a631a4b5ac17791a02702d54c89edc3e\": rpc error: code = NotFound desc = could not find container \"312e91398fefd524feec8a258d743035a631a4b5ac17791a02702d54c89edc3e\": container with ID starting with 312e91398fefd524feec8a258d743035a631a4b5ac17791a02702d54c89edc3e not found: ID does not exist" Nov 24 23:33:04 crc kubenswrapper[4767]: I1124 23:33:04.334824 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75efd199-8f88-4a24-9020-465749614adb" path="/var/lib/kubelet/pods/75efd199-8f88-4a24-9020-465749614adb/volumes" Nov 24 23:33:05 crc kubenswrapper[4767]: I1124 23:33:05.481541 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:33:05 crc kubenswrapper[4767]: I1124 23:33:05.481602 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:33:05 crc kubenswrapper[4767]: I1124 23:33:05.481651 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 23:33:05 crc kubenswrapper[4767]: I1124 23:33:05.482552 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c7cc72074d182d5318650835206882a6f9a9af381df20391df98140e9145d85"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 23:33:05 crc kubenswrapper[4767]: I1124 23:33:05.482622 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://5c7cc72074d182d5318650835206882a6f9a9af381df20391df98140e9145d85" gracePeriod=600 Nov 24 23:33:06 crc kubenswrapper[4767]: I1124 23:33:06.217451 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="5c7cc72074d182d5318650835206882a6f9a9af381df20391df98140e9145d85" exitCode=0 Nov 24 23:33:06 crc kubenswrapper[4767]: I1124 23:33:06.218180 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"5c7cc72074d182d5318650835206882a6f9a9af381df20391df98140e9145d85"} Nov 24 23:33:06 crc kubenswrapper[4767]: I1124 23:33:06.218207 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f"} Nov 24 23:33:06 crc kubenswrapper[4767]: I1124 23:33:06.218223 4767 scope.go:117] "RemoveContainer" containerID="ea255b6a38ee86d60098ddc62428f42ef6af72c807df7bf3926ac70e0939f200" Nov 24 23:33:14 crc kubenswrapper[4767]: I1124 23:33:14.141574 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-4j6q6_1b33b046-047a-4fb3-a8f7-5878cb5b67a4/nmstate-console-plugin/0.log" Nov 24 23:33:14 crc kubenswrapper[4767]: I1124 23:33:14.320975 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tn5df_ec2443e8-31e2-462e-8228-20b836a0293b/nmstate-handler/0.log" Nov 24 23:33:14 crc kubenswrapper[4767]: I1124 23:33:14.405035 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-vc96f_03c475f5-6bf2-4d5b-9d4a-9f7e9a796a14/kube-rbac-proxy/0.log" Nov 24 23:33:14 crc kubenswrapper[4767]: I1124 23:33:14.435146 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-vc96f_03c475f5-6bf2-4d5b-9d4a-9f7e9a796a14/nmstate-metrics/0.log" Nov 
24 23:33:14 crc kubenswrapper[4767]: I1124 23:33:14.534856 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-gsqpw_7c8fb1d2-5046-4cd5-a080-9f1d2fbf95dc/nmstate-operator/0.log" Nov 24 23:33:14 crc kubenswrapper[4767]: I1124 23:33:14.634553 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-vw4tc_949ff973-0dba-43c3-9797-a11b5df07b78/nmstate-webhook/0.log" Nov 24 23:33:29 crc kubenswrapper[4767]: I1124 23:33:29.988958 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5z6rv_42c8f455-18a7-42b3-ace1-f84396927f3f/controller/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.052471 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5z6rv_42c8f455-18a7-42b3-ace1-f84396927f3f/kube-rbac-proxy/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.227957 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-frr-files/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.375883 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-metrics/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.387227 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-reloader/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.389385 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-frr-files/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.446528 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-reloader/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.558167 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-reloader/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.563373 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-frr-files/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.594029 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-metrics/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.632489 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-metrics/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.789003 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/controller/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.815018 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-reloader/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.817988 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-metrics/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.828583 4767 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/cp-frr-files/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.994218 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/kube-rbac-proxy/0.log" Nov 24 23:33:30 crc kubenswrapper[4767]: I1124 23:33:30.998788 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/frr-metrics/0.log" Nov 24 23:33:31 crc kubenswrapper[4767]: I1124 23:33:31.036043 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/kube-rbac-proxy-frr/0.log" Nov 24 23:33:31 crc kubenswrapper[4767]: I1124 23:33:31.205905 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-7dw7f_a335b09c-eb27-4f81-92fd-c8e8cf54bc29/frr-k8s-webhook-server/0.log" Nov 24 23:33:31 crc kubenswrapper[4767]: I1124 23:33:31.210858 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/reloader/0.log" Nov 24 23:33:31 crc kubenswrapper[4767]: I1124 23:33:31.369868 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7754dcd9b8-4f27l_731cc1e2-6b05-450a-b193-7642ea4674ba/manager/0.log" Nov 24 23:33:31 crc kubenswrapper[4767]: I1124 23:33:31.542295 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-549d689cb8-wpm9x_ef716a61-b638-498b-b9b0-46ce4d9b2a4b/webhook-server/0.log" Nov 24 23:33:31 crc kubenswrapper[4767]: I1124 23:33:31.705573 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rtxm7_fe9b8380-26eb-4029-aff7-25244660b6be/kube-rbac-proxy/0.log" Nov 24 23:33:32 crc kubenswrapper[4767]: I1124 23:33:32.270489 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rtxm7_fe9b8380-26eb-4029-aff7-25244660b6be/speaker/0.log" Nov 24 23:33:32 crc kubenswrapper[4767]: I1124 23:33:32.753940 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9vtpb_027827e3-0a39-466b-9b89-304593d0c558/frr/0.log" Nov 24 23:33:46 crc kubenswrapper[4767]: I1124 23:33:46.343559 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6_e4522745-8479-4cf2-8703-03433a9be00e/util/0.log" Nov 24 23:33:46 crc kubenswrapper[4767]: I1124 23:33:46.556849 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6_e4522745-8479-4cf2-8703-03433a9be00e/pull/0.log" Nov 24 23:33:46 crc kubenswrapper[4767]: I1124 23:33:46.562800 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6_e4522745-8479-4cf2-8703-03433a9be00e/util/0.log" Nov 24 23:33:46 crc kubenswrapper[4767]: I1124 23:33:46.589342 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6_e4522745-8479-4cf2-8703-03433a9be00e/pull/0.log" Nov 24 23:33:46 crc kubenswrapper[4767]: I1124 23:33:46.765033 4767 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6_e4522745-8479-4cf2-8703-03433a9be00e/util/0.log" Nov 24 23:33:46 crc kubenswrapper[4767]: I1124 23:33:46.791326 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6_e4522745-8479-4cf2-8703-03433a9be00e/pull/0.log" Nov 24 23:33:46 crc kubenswrapper[4767]: I1124 23:33:46.834535 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehnfp6_e4522745-8479-4cf2-8703-03433a9be00e/extract/0.log" Nov 24 23:33:46 crc kubenswrapper[4767]: I1124 23:33:46.953433 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7_abb84f01-f1a5-4197-bac4-b109344281a8/util/0.log" Nov 24 23:33:47 crc kubenswrapper[4767]: I1124 23:33:47.180695 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7_abb84f01-f1a5-4197-bac4-b109344281a8/pull/0.log" Nov 24 23:33:47 crc kubenswrapper[4767]: I1124 23:33:47.184386 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7_abb84f01-f1a5-4197-bac4-b109344281a8/util/0.log" Nov 24 23:33:47 crc kubenswrapper[4767]: I1124 23:33:47.185142 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7_abb84f01-f1a5-4197-bac4-b109344281a8/pull/0.log" Nov 24 23:33:47 crc kubenswrapper[4767]: I1124 23:33:47.379056 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7_abb84f01-f1a5-4197-bac4-b109344281a8/util/0.log" Nov 24 23:33:47 crc kubenswrapper[4767]: I1124 23:33:47.408652 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7_abb84f01-f1a5-4197-bac4-b109344281a8/pull/0.log" Nov 24 23:33:47 crc kubenswrapper[4767]: I1124 23:33:47.412817 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mmhp7_abb84f01-f1a5-4197-bac4-b109344281a8/extract/0.log" Nov 24 23:33:47 crc kubenswrapper[4767]: I1124 23:33:47.551840 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dk69f_255d133f-2de5-4b7d-a1dc-9091d0bd6580/extract-utilities/0.log" Nov 24 23:33:47 crc kubenswrapper[4767]: I1124 23:33:47.705691 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dk69f_255d133f-2de5-4b7d-a1dc-9091d0bd6580/extract-content/0.log" Nov 24 23:33:47 crc kubenswrapper[4767]: I1124 23:33:47.725141 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dk69f_255d133f-2de5-4b7d-a1dc-9091d0bd6580/extract-utilities/0.log" Nov 24 23:33:47 crc kubenswrapper[4767]: I1124 23:33:47.727104 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dk69f_255d133f-2de5-4b7d-a1dc-9091d0bd6580/extract-content/0.log" Nov 24 23:33:48 crc kubenswrapper[4767]: I1124 23:33:48.104690 4767 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dk69f_255d133f-2de5-4b7d-a1dc-9091d0bd6580/extract-content/0.log" Nov 24 23:33:48 crc kubenswrapper[4767]: I1124 23:33:48.140512 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dk69f_255d133f-2de5-4b7d-a1dc-9091d0bd6580/extract-utilities/0.log" Nov 24 23:33:48 crc kubenswrapper[4767]: I1124 23:33:48.276711 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zksv2_29f61f53-2472-439d-929c-29955a7d1849/extract-utilities/0.log" Nov 24 23:33:48 crc kubenswrapper[4767]: I1124 23:33:48.495007 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zksv2_29f61f53-2472-439d-929c-29955a7d1849/extract-content/0.log" Nov 24 23:33:48 crc kubenswrapper[4767]: I1124 23:33:48.512527 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zksv2_29f61f53-2472-439d-929c-29955a7d1849/extract-utilities/0.log" Nov 24 23:33:48 crc kubenswrapper[4767]: I1124 23:33:48.548848 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zksv2_29f61f53-2472-439d-929c-29955a7d1849/extract-content/0.log" Nov 24 23:33:48 crc kubenswrapper[4767]: I1124 23:33:48.622569 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dk69f_255d133f-2de5-4b7d-a1dc-9091d0bd6580/registry-server/0.log" Nov 24 23:33:48 crc kubenswrapper[4767]: I1124 23:33:48.785532 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zksv2_29f61f53-2472-439d-929c-29955a7d1849/extract-content/0.log" Nov 24 23:33:48 crc kubenswrapper[4767]: I1124 23:33:48.861981 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zksv2_29f61f53-2472-439d-929c-29955a7d1849/extract-utilities/0.log" Nov 24 23:33:49 crc kubenswrapper[4767]: I1124 23:33:49.074087 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff_585faaa9-4163-4066-b609-77274cc5a207/util/0.log" Nov 24 23:33:49 crc kubenswrapper[4767]: I1124 23:33:49.312055 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff_585faaa9-4163-4066-b609-77274cc5a207/util/0.log" Nov 24 23:33:49 crc kubenswrapper[4767]: I1124 23:33:49.330206 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff_585faaa9-4163-4066-b609-77274cc5a207/pull/0.log" Nov 24 23:33:49 crc kubenswrapper[4767]: I1124 23:33:49.384285 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff_585faaa9-4163-4066-b609-77274cc5a207/pull/0.log" Nov 24 23:33:49 crc kubenswrapper[4767]: I1124 23:33:49.568370 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff_585faaa9-4163-4066-b609-77274cc5a207/util/0.log" Nov 24 23:33:49 crc kubenswrapper[4767]: I1124 23:33:49.607573 4767 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff_585faaa9-4163-4066-b609-77274cc5a207/pull/0.log" Nov 24 23:33:49 crc kubenswrapper[4767]: I1124 23:33:49.655202 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6nbnff_585faaa9-4163-4066-b609-77274cc5a207/extract/0.log" Nov 24 23:33:49 crc kubenswrapper[4767]: I1124 23:33:49.728285 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zksv2_29f61f53-2472-439d-929c-29955a7d1849/registry-server/0.log" Nov 24 23:33:49 crc kubenswrapper[4767]: I1124 23:33:49.802195 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tmp5k_ad09ebd3-c91e-47fc-9f29-6a6acded7085/marketplace-operator/0.log" Nov 24 23:33:49 crc kubenswrapper[4767]: I1124 23:33:49.884037 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lndk9_adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1/extract-utilities/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.034707 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lndk9_adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1/extract-content/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.050475 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lndk9_adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1/extract-content/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.077352 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lndk9_adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1/extract-utilities/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.220263 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lndk9_adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1/extract-utilities/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.266825 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lndk9_adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1/extract-content/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.346676 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pm5z7_116b0d83-f4a6-4033-82fe-a29430d7b576/extract-utilities/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.416316 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lndk9_adc6f6b3-70a5-4fae-a242-9ed75cb3c9b1/registry-server/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.493383 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pm5z7_116b0d83-f4a6-4033-82fe-a29430d7b576/extract-utilities/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.531328 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pm5z7_116b0d83-f4a6-4033-82fe-a29430d7b576/extract-content/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.532642 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pm5z7_116b0d83-f4a6-4033-82fe-a29430d7b576/extract-content/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.689112 4767 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pm5z7_116b0d83-f4a6-4033-82fe-a29430d7b576/extract-utilities/0.log" Nov 24 23:33:50 crc kubenswrapper[4767]: I1124 23:33:50.694536 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pm5z7_116b0d83-f4a6-4033-82fe-a29430d7b576/extract-content/0.log" Nov 24 23:33:51 crc kubenswrapper[4767]: I1124 23:33:51.396506 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pm5z7_116b0d83-f4a6-4033-82fe-a29430d7b576/registry-server/0.log" Nov 24 23:34:04 crc kubenswrapper[4767]: I1124 23:34:04.583953 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-g49md_877151d2-38aa-421e-9335-dc8ef0f8dfc6/prometheus-operator/0.log" Nov 24 23:34:04 crc kubenswrapper[4767]: I1124 23:34:04.786617 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-b89899fcf-p7l8t_cf6f3541-d121-4bbe-8b0b-969a4c0031a6/prometheus-operator-admission-webhook/0.log" Nov 24 23:34:04 crc kubenswrapper[4767]: I1124 23:34:04.824428 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-b89899fcf-qpxh9_292b7555-3ea7-43a0-a123-d8c03d0181f4/prometheus-operator-admission-webhook/0.log" Nov 24 23:34:04 crc kubenswrapper[4767]: I1124 23:34:04.965581 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-8759g_04302788-c622-42ea-b5a6-eff1c0afd3ce/operator/0.log" Nov 24 23:34:05 crc kubenswrapper[4767]: I1124 23:34:05.024579 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-wwcbf_29641d4f-33cd-4116-a496-0767a54e5403/perses-operator/0.log" Nov 24 23:34:44 crc kubenswrapper[4767]: I1124 23:34:44.648180 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 24 23:34:44 crc kubenswrapper[4767]: I1124 23:34:44.667791 4767 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 24 23:34:44 crc kubenswrapper[4767]: I1124 23:34:44.696479 4767 csr.go:261] certificate signing request csr-z9klg is approved, waiting to be issued Nov 24 23:34:44 crc kubenswrapper[4767]: I1124 23:34:44.705775 4767 csr.go:257] certificate signing request csr-z9klg is issued Nov 24 23:34:45 crc kubenswrapper[4767]: I1124 23:34:45.709070 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-24 23:29:44 +0000 UTC, rotation deadline is 2026-09-20 15:02:01.161954809 +0000 UTC Nov 24 23:34:45 crc kubenswrapper[4767]: I1124 23:34:45.710630 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7191h27m15.451342578s for next certificate rotation Nov 24 23:34:49 crc kubenswrapper[4767]: I1124 23:34:49.003068 4767 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 24 23:35:05 crc kubenswrapper[4767]: I1124 23:35:05.481864 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 24 23:35:05 crc kubenswrapper[4767]: I1124 23:35:05.482820 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:35:35 crc kubenswrapper[4767]: I1124 23:35:35.482145 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:35:35 crc kubenswrapper[4767]: I1124 23:35:35.483000 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:35:46 crc kubenswrapper[4767]: I1124 23:35:46.444293 4767 scope.go:117] "RemoveContainer" containerID="6bc1e72eb6aa3156128a8f2d7e6acf2b65cc65b68f6a9f51995e1264b0dca9c4" Nov 24 23:35:46 crc kubenswrapper[4767]: I1124 23:35:46.503383 4767 scope.go:117] "RemoveContainer" containerID="8f85662032e9be473ba58f6fe190b8c004a22e64c4fa7adfd5f2e5854fb2f80a" Nov 24 23:35:46 crc kubenswrapper[4767]: I1124 23:35:46.543795 4767 scope.go:117] "RemoveContainer" containerID="fbbe543c7072719c949bd4cd6b1eecf5499d2316f1fb6743b28cfe9c9c142152" Nov 24 23:36:05 crc kubenswrapper[4767]: I1124 23:36:05.481466 4767 patch_prober.go:28] interesting pod/machine-config-daemon-74ffd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:36:05 crc kubenswrapper[4767]: I1124 23:36:05.482240 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:36:05 crc kubenswrapper[4767]: I1124 23:36:05.482330 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" Nov 24 23:36:05 crc kubenswrapper[4767]: I1124 23:36:05.483432 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f"} pod="openshift-machine-config-operator/machine-config-daemon-74ffd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 23:36:05 crc kubenswrapper[4767]: I1124 23:36:05.483537 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerName="machine-config-daemon" containerID="cri-o://5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" gracePeriod=600 Nov 24 23:36:05 crc kubenswrapper[4767]: E1124 23:36:05.644947 4767 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:36:06 crc kubenswrapper[4767]: I1124 23:36:06.250675 4767 generic.go:334] "Generic (PLEG): container finished" podID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" exitCode=0 Nov 24 23:36:06 crc kubenswrapper[4767]: I1124 23:36:06.250804 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerDied","Data":"5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f"} Nov 24 23:36:06 crc kubenswrapper[4767]: I1124 23:36:06.251119 4767 scope.go:117] "RemoveContainer" containerID="5c7cc72074d182d5318650835206882a6f9a9af381df20391df98140e9145d85" Nov 24 23:36:06 crc kubenswrapper[4767]: I1124 23:36:06.253758 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:36:06 crc kubenswrapper[4767]: E1124 23:36:06.254463 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:36:19 crc kubenswrapper[4767]: I1124 23:36:19.314393 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:36:19 crc kubenswrapper[4767]: E1124 23:36:19.315562 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:36:21 crc kubenswrapper[4767]: I1124 23:36:21.453264 4767 generic.go:334] "Generic (PLEG): container finished" podID="7a93d272-c118-4fa4-9e21-608657fd04a0" containerID="90d6cae2bd518dd4ae469c74d35fc3f6e1da72bab16223586b5e4bc3cdae9580" exitCode=0 Nov 24 23:36:21 crc kubenswrapper[4767]: I1124 23:36:21.453679 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xf7h4/must-gather-jffcq" event={"ID":"7a93d272-c118-4fa4-9e21-608657fd04a0","Type":"ContainerDied","Data":"90d6cae2bd518dd4ae469c74d35fc3f6e1da72bab16223586b5e4bc3cdae9580"} Nov 24 23:36:21 crc kubenswrapper[4767]: I1124 23:36:21.456556 4767 scope.go:117] "RemoveContainer" containerID="90d6cae2bd518dd4ae469c74d35fc3f6e1da72bab16223586b5e4bc3cdae9580" Nov 24 23:36:22 crc kubenswrapper[4767]: I1124 23:36:22.045683 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xf7h4_must-gather-jffcq_7a93d272-c118-4fa4-9e21-608657fd04a0/gather/0.log" Nov 24 23:36:30 crc 
kubenswrapper[4767]: I1124 23:36:30.313624 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:36:30 crc kubenswrapper[4767]: E1124 23:36:30.314797 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:36:30 crc kubenswrapper[4767]: I1124 23:36:30.330066 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xf7h4/must-gather-jffcq"] Nov 24 23:36:30 crc kubenswrapper[4767]: I1124 23:36:30.330423 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xf7h4/must-gather-jffcq" podUID="7a93d272-c118-4fa4-9e21-608657fd04a0" containerName="copy" containerID="cri-o://abec92cf451669e9fd62dd4e1f8bd9f62c87281384423028b0fb17b421053687" gracePeriod=2 Nov 24 23:36:30 crc kubenswrapper[4767]: I1124 23:36:30.341839 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xf7h4/must-gather-jffcq"] Nov 24 23:36:30 crc kubenswrapper[4767]: I1124 23:36:30.560238 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xf7h4_must-gather-jffcq_7a93d272-c118-4fa4-9e21-608657fd04a0/copy/0.log" Nov 24 23:36:30 crc kubenswrapper[4767]: I1124 23:36:30.560770 4767 generic.go:334] "Generic (PLEG): container finished" podID="7a93d272-c118-4fa4-9e21-608657fd04a0" containerID="abec92cf451669e9fd62dd4e1f8bd9f62c87281384423028b0fb17b421053687" exitCode=143 Nov 24 23:36:30 crc kubenswrapper[4767]: I1124 23:36:30.853648 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xf7h4_must-gather-jffcq_7a93d272-c118-4fa4-9e21-608657fd04a0/copy/0.log" Nov 24 23:36:30 crc kubenswrapper[4767]: I1124 23:36:30.854295 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xf7h4/must-gather-jffcq" Nov 24 23:36:30 crc kubenswrapper[4767]: I1124 23:36:30.979713 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwwqj\" (UniqueName: \"kubernetes.io/projected/7a93d272-c118-4fa4-9e21-608657fd04a0-kube-api-access-nwwqj\") pod \"7a93d272-c118-4fa4-9e21-608657fd04a0\" (UID: \"7a93d272-c118-4fa4-9e21-608657fd04a0\") " Nov 24 23:36:30 crc kubenswrapper[4767]: I1124 23:36:30.979849 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a93d272-c118-4fa4-9e21-608657fd04a0-must-gather-output\") pod \"7a93d272-c118-4fa4-9e21-608657fd04a0\" (UID: \"7a93d272-c118-4fa4-9e21-608657fd04a0\") " Nov 24 23:36:30 crc kubenswrapper[4767]: I1124 23:36:30.988750 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a93d272-c118-4fa4-9e21-608657fd04a0-kube-api-access-nwwqj" (OuterVolumeSpecName: "kube-api-access-nwwqj") pod "7a93d272-c118-4fa4-9e21-608657fd04a0" (UID: "7a93d272-c118-4fa4-9e21-608657fd04a0"). InnerVolumeSpecName "kube-api-access-nwwqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:36:31 crc kubenswrapper[4767]: I1124 23:36:31.082522 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwwqj\" (UniqueName: \"kubernetes.io/projected/7a93d272-c118-4fa4-9e21-608657fd04a0-kube-api-access-nwwqj\") on node \"crc\" DevicePath \"\"" Nov 24 23:36:31 crc kubenswrapper[4767]: I1124 23:36:31.163982 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a93d272-c118-4fa4-9e21-608657fd04a0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7a93d272-c118-4fa4-9e21-608657fd04a0" (UID: "7a93d272-c118-4fa4-9e21-608657fd04a0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:36:31 crc kubenswrapper[4767]: I1124 23:36:31.185302 4767 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7a93d272-c118-4fa4-9e21-608657fd04a0-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 23:36:31 crc kubenswrapper[4767]: I1124 23:36:31.570593 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xf7h4_must-gather-jffcq_7a93d272-c118-4fa4-9e21-608657fd04a0/copy/0.log" Nov 24 23:36:31 crc kubenswrapper[4767]: I1124 23:36:31.571333 4767 scope.go:117] "RemoveContainer" containerID="abec92cf451669e9fd62dd4e1f8bd9f62c87281384423028b0fb17b421053687" Nov 24 23:36:31 crc kubenswrapper[4767]: I1124 23:36:31.571404 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xf7h4/must-gather-jffcq" Nov 24 23:36:31 crc kubenswrapper[4767]: I1124 23:36:31.593975 4767 scope.go:117] "RemoveContainer" containerID="90d6cae2bd518dd4ae469c74d35fc3f6e1da72bab16223586b5e4bc3cdae9580" Nov 24 23:36:32 crc kubenswrapper[4767]: I1124 23:36:32.336824 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a93d272-c118-4fa4-9e21-608657fd04a0" path="/var/lib/kubelet/pods/7a93d272-c118-4fa4-9e21-608657fd04a0/volumes" Nov 24 23:36:41 crc kubenswrapper[4767]: I1124 23:36:41.315719 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:36:41 crc kubenswrapper[4767]: E1124 23:36:41.318972 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.526847 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f4m6n"] Nov 24 23:36:53 crc kubenswrapper[4767]: E1124 23:36:53.528236 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75efd199-8f88-4a24-9020-465749614adb" containerName="registry-server" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.528257 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="75efd199-8f88-4a24-9020-465749614adb" containerName="registry-server" Nov 24 23:36:53 crc kubenswrapper[4767]: E1124 23:36:53.528305 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a93d272-c118-4fa4-9e21-608657fd04a0" containerName="copy" Nov 24 23:36:53 crc 
kubenswrapper[4767]: I1124 23:36:53.528315 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a93d272-c118-4fa4-9e21-608657fd04a0" containerName="copy" Nov 24 23:36:53 crc kubenswrapper[4767]: E1124 23:36:53.528339 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75efd199-8f88-4a24-9020-465749614adb" containerName="extract-content" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.528347 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="75efd199-8f88-4a24-9020-465749614adb" containerName="extract-content" Nov 24 23:36:53 crc kubenswrapper[4767]: E1124 23:36:53.528364 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75efd199-8f88-4a24-9020-465749614adb" containerName="extract-utilities" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.528372 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="75efd199-8f88-4a24-9020-465749614adb" containerName="extract-utilities" Nov 24 23:36:53 crc kubenswrapper[4767]: E1124 23:36:53.528397 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a93d272-c118-4fa4-9e21-608657fd04a0" containerName="gather" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.528404 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a93d272-c118-4fa4-9e21-608657fd04a0" containerName="gather" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.528663 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a93d272-c118-4fa4-9e21-608657fd04a0" containerName="copy" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.528683 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a93d272-c118-4fa4-9e21-608657fd04a0" containerName="gather" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.528711 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="75efd199-8f88-4a24-9020-465749614adb" containerName="registry-server" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.530443 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.550911 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4m6n"] Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.651609 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqplk\" (UniqueName: \"kubernetes.io/projected/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-kube-api-access-qqplk\") pod \"redhat-operators-f4m6n\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.651683 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-utilities\") pod \"redhat-operators-f4m6n\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.651762 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-catalog-content\") pod \"redhat-operators-f4m6n\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.754050 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-utilities\") pod \"redhat-operators-f4m6n\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.754178 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-catalog-content\") pod \"redhat-operators-f4m6n\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.754356 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqplk\" (UniqueName: \"kubernetes.io/projected/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-kube-api-access-qqplk\") pod \"redhat-operators-f4m6n\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.754598 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-utilities\") pod \"redhat-operators-f4m6n\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.754598 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-catalog-content\") pod \"redhat-operators-f4m6n\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.775208 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qqplk\" (UniqueName: \"kubernetes.io/projected/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-kube-api-access-qqplk\") pod \"redhat-operators-f4m6n\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:36:53 crc kubenswrapper[4767]: I1124 23:36:53.874724 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:36:54 crc kubenswrapper[4767]: I1124 23:36:54.340294 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4m6n"] Nov 24 23:36:54 crc kubenswrapper[4767]: E1124 23:36:54.802464 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75164bf8_2c0c_46a1_bcbf_bbcee386a0f5.slice/crio-conmon-897ebe3d21b254efed96c6668f404c9d575c848fb251a11996c1c684b29bfb69.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75164bf8_2c0c_46a1_bcbf_bbcee386a0f5.slice/crio-897ebe3d21b254efed96c6668f404c9d575c848fb251a11996c1c684b29bfb69.scope\": RecentStats: unable to find data in memory cache]" Nov 24 23:36:54 crc kubenswrapper[4767]: I1124 23:36:54.861312 4767 generic.go:334] "Generic (PLEG): container finished" podID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" containerID="897ebe3d21b254efed96c6668f404c9d575c848fb251a11996c1c684b29bfb69" exitCode=0 Nov 24 23:36:54 crc kubenswrapper[4767]: I1124 23:36:54.861363 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4m6n" event={"ID":"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5","Type":"ContainerDied","Data":"897ebe3d21b254efed96c6668f404c9d575c848fb251a11996c1c684b29bfb69"} Nov 24 23:36:54 crc kubenswrapper[4767]: I1124 23:36:54.861408 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4m6n" event={"ID":"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5","Type":"ContainerStarted","Data":"f40f76635cefb330b637c575bbc07ab928c0c1b149a69ccdc27a1b52794b91fc"} Nov 24 23:36:54 crc kubenswrapper[4767]: I1124 23:36:54.863244 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 23:36:55 crc kubenswrapper[4767]: I1124 23:36:55.873703 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4m6n" event={"ID":"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5","Type":"ContainerStarted","Data":"2f675a19005bc8b6ee8ef80835d9c17b81a04f213df77e596a5604b1d9bd9c44"} Nov 24 23:36:56 crc kubenswrapper[4767]: I1124 23:36:56.314199 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:36:56 crc kubenswrapper[4767]: E1124 23:36:56.314795 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:36:56 crc kubenswrapper[4767]: I1124 23:36:56.888785 4767 generic.go:334] "Generic (PLEG): container finished" podID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" 
containerID="2f675a19005bc8b6ee8ef80835d9c17b81a04f213df77e596a5604b1d9bd9c44" exitCode=0 Nov 24 23:36:56 crc kubenswrapper[4767]: I1124 23:36:56.888848 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4m6n" event={"ID":"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5","Type":"ContainerDied","Data":"2f675a19005bc8b6ee8ef80835d9c17b81a04f213df77e596a5604b1d9bd9c44"} Nov 24 23:36:57 crc kubenswrapper[4767]: I1124 23:36:57.919409 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4m6n" event={"ID":"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5","Type":"ContainerStarted","Data":"a363e570fd31bd3e356169913cf271b75f64d2e9497ce83f1db1d619bc0619db"} Nov 24 23:36:57 crc kubenswrapper[4767]: I1124 23:36:57.943531 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f4m6n" podStartSLOduration=2.47503174 podStartE2EDuration="4.943515441s" podCreationTimestamp="2025-11-24 23:36:53 +0000 UTC" firstStartedPulling="2025-11-24 23:36:54.863031572 +0000 UTC m=+7097.780014944" lastFinishedPulling="2025-11-24 23:36:57.331515253 +0000 UTC m=+7100.248498645" observedRunningTime="2025-11-24 23:36:57.938286443 +0000 UTC m=+7100.855269835" watchObservedRunningTime="2025-11-24 23:36:57.943515441 +0000 UTC m=+7100.860498813" Nov 24 23:37:03 crc kubenswrapper[4767]: I1124 23:37:03.875218 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:37:03 crc kubenswrapper[4767]: I1124 23:37:03.876001 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:37:03 crc kubenswrapper[4767]: I1124 23:37:03.966244 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:37:04 crc kubenswrapper[4767]: I1124 23:37:04.045620 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:37:04 crc kubenswrapper[4767]: I1124 23:37:04.205331 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4m6n"] Nov 24 23:37:06 crc kubenswrapper[4767]: I1124 23:37:06.002500 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f4m6n" podUID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" containerName="registry-server" containerID="cri-o://a363e570fd31bd3e356169913cf271b75f64d2e9497ce83f1db1d619bc0619db" gracePeriod=2 Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.035005 4767 generic.go:334] "Generic (PLEG): container finished" podID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" containerID="a363e570fd31bd3e356169913cf271b75f64d2e9497ce83f1db1d619bc0619db" exitCode=0 Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.035103 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4m6n" event={"ID":"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5","Type":"ContainerDied","Data":"a363e570fd31bd3e356169913cf271b75f64d2e9497ce83f1db1d619bc0619db"} Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.378846 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.560255 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-utilities\") pod \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.560762 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqplk\" (UniqueName: \"kubernetes.io/projected/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-kube-api-access-qqplk\") pod \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.561006 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-catalog-content\") pod \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\" (UID: \"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5\") " Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.562070 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-utilities" (OuterVolumeSpecName: "utilities") pod "75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" (UID: "75164bf8-2c0c-46a1-bcbf-bbcee386a0f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.569766 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-kube-api-access-qqplk" (OuterVolumeSpecName: "kube-api-access-qqplk") pod "75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" (UID: "75164bf8-2c0c-46a1-bcbf-bbcee386a0f5"). InnerVolumeSpecName "kube-api-access-qqplk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.663382 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.663422 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqplk\" (UniqueName: \"kubernetes.io/projected/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-kube-api-access-qqplk\") on node \"crc\" DevicePath \"\"" Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.672694 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" (UID: "75164bf8-2c0c-46a1-bcbf-bbcee386a0f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:37:08 crc kubenswrapper[4767]: I1124 23:37:08.766046 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:37:09 crc kubenswrapper[4767]: I1124 23:37:09.055769 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4m6n" event={"ID":"75164bf8-2c0c-46a1-bcbf-bbcee386a0f5","Type":"ContainerDied","Data":"f40f76635cefb330b637c575bbc07ab928c0c1b149a69ccdc27a1b52794b91fc"} Nov 24 23:37:09 crc kubenswrapper[4767]: I1124 23:37:09.055988 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4m6n" Nov 24 23:37:09 crc kubenswrapper[4767]: I1124 23:37:09.064543 4767 scope.go:117] "RemoveContainer" containerID="a363e570fd31bd3e356169913cf271b75f64d2e9497ce83f1db1d619bc0619db" Nov 24 23:37:09 crc kubenswrapper[4767]: I1124 23:37:09.119719 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4m6n"] Nov 24 23:37:09 crc kubenswrapper[4767]: I1124 23:37:09.119763 4767 scope.go:117] "RemoveContainer" containerID="2f675a19005bc8b6ee8ef80835d9c17b81a04f213df77e596a5604b1d9bd9c44" Nov 24 23:37:09 crc kubenswrapper[4767]: I1124 23:37:09.130014 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f4m6n"] Nov 24 23:37:09 crc kubenswrapper[4767]: I1124 23:37:09.143075 4767 scope.go:117] "RemoveContainer" containerID="897ebe3d21b254efed96c6668f404c9d575c848fb251a11996c1c684b29bfb69" Nov 24 23:37:09 crc kubenswrapper[4767]: I1124 23:37:09.313197 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:37:09 crc kubenswrapper[4767]: E1124 23:37:09.313612 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:37:10 crc kubenswrapper[4767]: I1124 23:37:10.330050 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" path="/var/lib/kubelet/pods/75164bf8-2c0c-46a1-bcbf-bbcee386a0f5/volumes" Nov 24 23:37:23 crc kubenswrapper[4767]: I1124 23:37:23.314430 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:37:23 crc kubenswrapper[4767]: E1124 23:37:23.316748 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:37:38 crc kubenswrapper[4767]: I1124 23:37:38.331137 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:37:38 crc kubenswrapper[4767]: E1124 23:37:38.332439 
4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:37:46 crc kubenswrapper[4767]: I1124 23:37:46.665827 4767 scope.go:117] "RemoveContainer" containerID="5c02dafd4694a9b4b8044a4c6fede0cfc4fb32c28cc045581bdffac7b4c80315" Nov 24 23:37:53 crc kubenswrapper[4767]: I1124 23:37:53.313876 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:37:53 crc kubenswrapper[4767]: E1124 23:37:53.314641 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:38:06 crc kubenswrapper[4767]: I1124 23:38:06.314553 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:38:06 crc kubenswrapper[4767]: E1124 23:38:06.316356 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:38:21 crc kubenswrapper[4767]: I1124 23:38:21.315347 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:38:21 crc kubenswrapper[4767]: E1124 23:38:21.316334 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:38:36 crc kubenswrapper[4767]: I1124 23:38:36.314079 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:38:36 crc kubenswrapper[4767]: E1124 23:38:36.314985 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:38:48 crc kubenswrapper[4767]: I1124 23:38:48.314158 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:38:48 crc kubenswrapper[4767]: E1124 23:38:48.315219 4767 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:39:01 crc kubenswrapper[4767]: I1124 23:39:01.313162 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:39:01 crc kubenswrapper[4767]: E1124 23:39:01.314017 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:39:14 crc kubenswrapper[4767]: I1124 23:39:14.314205 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:39:14 crc kubenswrapper[4767]: E1124 23:39:14.316850 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:39:27 crc kubenswrapper[4767]: I1124 23:39:27.314342 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:39:27 crc kubenswrapper[4767]: E1124 23:39:27.315435 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:39:42 crc kubenswrapper[4767]: I1124 23:39:42.313930 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:39:42 crc kubenswrapper[4767]: E1124 23:39:42.314968 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:39:44 crc kubenswrapper[4767]: I1124 23:39:44.650501 4767 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 24 23:39:55 crc kubenswrapper[4767]: I1124 23:39:55.313104 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:39:55 crc kubenswrapper[4767]: E1124 23:39:55.313952 4767 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:40:07 crc kubenswrapper[4767]: I1124 23:40:07.313726 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:40:07 crc kubenswrapper[4767]: E1124 23:40:07.314871 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:40:21 crc kubenswrapper[4767]: I1124 23:40:21.314772 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:40:21 crc kubenswrapper[4767]: E1124 23:40:21.315925 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.313969 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dqqfh"] Nov 24 23:40:25 crc kubenswrapper[4767]: E1124 23:40:25.315086 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" containerName="registry-server" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.315104 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" containerName="registry-server" Nov 24 23:40:25 crc kubenswrapper[4767]: E1124 23:40:25.315143 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" containerName="extract-utilities" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.315153 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" containerName="extract-utilities" Nov 24 23:40:25 crc kubenswrapper[4767]: E1124 23:40:25.315171 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" containerName="extract-content" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.315179 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" containerName="extract-content" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.315431 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="75164bf8-2c0c-46a1-bcbf-bbcee386a0f5" containerName="registry-server" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.317305 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.333023 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqqfh"] Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.442497 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e107e68-320f-4213-b30b-eb239449d7d7-catalog-content\") pod \"certified-operators-dqqfh\" (UID: \"5e107e68-320f-4213-b30b-eb239449d7d7\") " pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.443108 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkdxr\" (UniqueName: \"kubernetes.io/projected/5e107e68-320f-4213-b30b-eb239449d7d7-kube-api-access-vkdxr\") pod \"certified-operators-dqqfh\" (UID: \"5e107e68-320f-4213-b30b-eb239449d7d7\") " pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.443462 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e107e68-320f-4213-b30b-eb239449d7d7-utilities\") pod \"certified-operators-dqqfh\" (UID: \"5e107e68-320f-4213-b30b-eb239449d7d7\") " pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.544946 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e107e68-320f-4213-b30b-eb239449d7d7-utilities\") pod \"certified-operators-dqqfh\" (UID: \"5e107e68-320f-4213-b30b-eb239449d7d7\") " pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.545026 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e107e68-320f-4213-b30b-eb239449d7d7-catalog-content\") pod \"certified-operators-dqqfh\" (UID: \"5e107e68-320f-4213-b30b-eb239449d7d7\") " pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.545222 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkdxr\" (UniqueName: \"kubernetes.io/projected/5e107e68-320f-4213-b30b-eb239449d7d7-kube-api-access-vkdxr\") pod \"certified-operators-dqqfh\" (UID: \"5e107e68-320f-4213-b30b-eb239449d7d7\") " pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.545739 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e107e68-320f-4213-b30b-eb239449d7d7-catalog-content\") pod \"certified-operators-dqqfh\" (UID: \"5e107e68-320f-4213-b30b-eb239449d7d7\") " pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.545858 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e107e68-320f-4213-b30b-eb239449d7d7-utilities\") pod \"certified-operators-dqqfh\" (UID: \"5e107e68-320f-4213-b30b-eb239449d7d7\") " pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.574463 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vkdxr\" (UniqueName: \"kubernetes.io/projected/5e107e68-320f-4213-b30b-eb239449d7d7-kube-api-access-vkdxr\") pod \"certified-operators-dqqfh\" (UID: \"5e107e68-320f-4213-b30b-eb239449d7d7\") " pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:25 crc kubenswrapper[4767]: I1124 23:40:25.685620 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:26 crc kubenswrapper[4767]: I1124 23:40:26.312191 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqqfh"] Nov 24 23:40:26 crc kubenswrapper[4767]: I1124 23:40:26.456013 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqqfh" event={"ID":"5e107e68-320f-4213-b30b-eb239449d7d7","Type":"ContainerStarted","Data":"9db77b33e8fe14ef9f3976d7b6f04416ce8d46858686971f1eda56e15a610dee"} Nov 24 23:40:27 crc kubenswrapper[4767]: I1124 23:40:27.472905 4767 generic.go:334] "Generic (PLEG): container finished" podID="5e107e68-320f-4213-b30b-eb239449d7d7" containerID="6935c70b904eba33293c9e164ffdc11c9ba4db87bb171cc79d92e8f832037a42" exitCode=0 Nov 24 23:40:27 crc kubenswrapper[4767]: I1124 23:40:27.472960 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqqfh" event={"ID":"5e107e68-320f-4213-b30b-eb239449d7d7","Type":"ContainerDied","Data":"6935c70b904eba33293c9e164ffdc11c9ba4db87bb171cc79d92e8f832037a42"} Nov 24 23:40:32 crc kubenswrapper[4767]: I1124 23:40:32.524354 4767 generic.go:334] "Generic (PLEG): container finished" podID="5e107e68-320f-4213-b30b-eb239449d7d7" containerID="79f5e549c7f98ef7bcebf15e800ce47902006d7f6b1c8cfde361a33e09b3b820" exitCode=0 Nov 24 23:40:32 crc kubenswrapper[4767]: I1124 23:40:32.524437 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqqfh" event={"ID":"5e107e68-320f-4213-b30b-eb239449d7d7","Type":"ContainerDied","Data":"79f5e549c7f98ef7bcebf15e800ce47902006d7f6b1c8cfde361a33e09b3b820"} Nov 24 23:40:33 crc kubenswrapper[4767]: I1124 23:40:33.537547 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqqfh" event={"ID":"5e107e68-320f-4213-b30b-eb239449d7d7","Type":"ContainerStarted","Data":"76a77648ee2584a629a9b1a4cc1667718037e0a19d45a281586f13e774aa5146"} Nov 24 23:40:33 crc kubenswrapper[4767]: I1124 23:40:33.563320 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dqqfh" podStartSLOduration=2.8983026560000003 podStartE2EDuration="8.563300972s" podCreationTimestamp="2025-11-24 23:40:25 +0000 UTC" firstStartedPulling="2025-11-24 23:40:27.476114979 +0000 UTC m=+7310.393098361" lastFinishedPulling="2025-11-24 23:40:33.141113285 +0000 UTC m=+7316.058096677" observedRunningTime="2025-11-24 23:40:33.559591947 +0000 UTC m=+7316.476575339" watchObservedRunningTime="2025-11-24 23:40:33.563300972 +0000 UTC m=+7316.480284354" Nov 24 23:40:34 crc kubenswrapper[4767]: I1124 23:40:34.314134 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:40:34 crc kubenswrapper[4767]: E1124 23:40:34.314578 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:40:35 crc kubenswrapper[4767]: I1124 23:40:35.686442 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:35 crc kubenswrapper[4767]: I1124 23:40:35.686957 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:35 crc kubenswrapper[4767]: I1124 23:40:35.771107 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.673657 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rwlxr"] Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.676299 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.701207 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwlxr"] Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.726058 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-catalog-content\") pod \"redhat-marketplace-rwlxr\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.726138 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-utilities\") pod \"redhat-marketplace-rwlxr\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.726328 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfqv5\" (UniqueName: \"kubernetes.io/projected/e347e2b0-6bbc-432b-b145-31bce6a91b08-kube-api-access-xfqv5\") pod \"redhat-marketplace-rwlxr\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.828737 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-catalog-content\") pod \"redhat-marketplace-rwlxr\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.828802 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-utilities\") pod \"redhat-marketplace-rwlxr\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.828948 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfqv5\" 
(UniqueName: \"kubernetes.io/projected/e347e2b0-6bbc-432b-b145-31bce6a91b08-kube-api-access-xfqv5\") pod \"redhat-marketplace-rwlxr\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.829457 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-catalog-content\") pod \"redhat-marketplace-rwlxr\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.829527 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-utilities\") pod \"redhat-marketplace-rwlxr\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:36 crc kubenswrapper[4767]: I1124 23:40:36.847866 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfqv5\" (UniqueName: \"kubernetes.io/projected/e347e2b0-6bbc-432b-b145-31bce6a91b08-kube-api-access-xfqv5\") pod \"redhat-marketplace-rwlxr\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:37 crc kubenswrapper[4767]: I1124 23:40:37.030776 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:37 crc kubenswrapper[4767]: I1124 23:40:37.535731 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwlxr"] Nov 24 23:40:37 crc kubenswrapper[4767]: I1124 23:40:37.587332 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwlxr" event={"ID":"e347e2b0-6bbc-432b-b145-31bce6a91b08","Type":"ContainerStarted","Data":"d595f0d964e8c8ca640c25d804b8beab1a360cbf7c6b098c4e418496cbb5dcca"} Nov 24 23:40:38 crc kubenswrapper[4767]: I1124 23:40:38.596512 4767 generic.go:334] "Generic (PLEG): container finished" podID="e347e2b0-6bbc-432b-b145-31bce6a91b08" containerID="bc154f4558e22cc20456187d3386d4b581e601c2c51028cf5ead508c6677b8e7" exitCode=0 Nov 24 23:40:38 crc kubenswrapper[4767]: I1124 23:40:38.596573 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwlxr" event={"ID":"e347e2b0-6bbc-432b-b145-31bce6a91b08","Type":"ContainerDied","Data":"bc154f4558e22cc20456187d3386d4b581e601c2c51028cf5ead508c6677b8e7"} Nov 24 23:40:39 crc kubenswrapper[4767]: I1124 23:40:39.610446 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwlxr" event={"ID":"e347e2b0-6bbc-432b-b145-31bce6a91b08","Type":"ContainerStarted","Data":"f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d"} Nov 24 23:40:40 crc kubenswrapper[4767]: I1124 23:40:40.622871 4767 generic.go:334] "Generic (PLEG): container finished" podID="e347e2b0-6bbc-432b-b145-31bce6a91b08" containerID="f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d" exitCode=0 Nov 24 23:40:40 crc kubenswrapper[4767]: I1124 23:40:40.623090 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwlxr" 
event={"ID":"e347e2b0-6bbc-432b-b145-31bce6a91b08","Type":"ContainerDied","Data":"f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d"} Nov 24 23:40:41 crc kubenswrapper[4767]: I1124 23:40:41.642955 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwlxr" event={"ID":"e347e2b0-6bbc-432b-b145-31bce6a91b08","Type":"ContainerStarted","Data":"eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398"} Nov 24 23:40:41 crc kubenswrapper[4767]: I1124 23:40:41.672111 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rwlxr" podStartSLOduration=3.09631775 podStartE2EDuration="5.672089812s" podCreationTimestamp="2025-11-24 23:40:36 +0000 UTC" firstStartedPulling="2025-11-24 23:40:38.599282139 +0000 UTC m=+7321.516265511" lastFinishedPulling="2025-11-24 23:40:41.175054171 +0000 UTC m=+7324.092037573" observedRunningTime="2025-11-24 23:40:41.662915073 +0000 UTC m=+7324.579898485" watchObservedRunningTime="2025-11-24 23:40:41.672089812 +0000 UTC m=+7324.589073184" Nov 24 23:40:45 crc kubenswrapper[4767]: I1124 23:40:45.741244 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dqqfh" Nov 24 23:40:45 crc kubenswrapper[4767]: I1124 23:40:45.824349 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqqfh"] Nov 24 23:40:45 crc kubenswrapper[4767]: I1124 23:40:45.879999 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk69f"] Nov 24 23:40:45 crc kubenswrapper[4767]: I1124 23:40:45.880345 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dk69f" podUID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" containerName="registry-server" containerID="cri-o://9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2" gracePeriod=2 Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.410427 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dk69f" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.454136 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-utilities\") pod \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.454359 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxtvg\" (UniqueName: \"kubernetes.io/projected/255d133f-2de5-4b7d-a1dc-9091d0bd6580-kube-api-access-dxtvg\") pod \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.454402 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-catalog-content\") pod \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\" (UID: \"255d133f-2de5-4b7d-a1dc-9091d0bd6580\") " Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.454702 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-utilities" (OuterVolumeSpecName: "utilities") pod "255d133f-2de5-4b7d-a1dc-9091d0bd6580" (UID: "255d133f-2de5-4b7d-a1dc-9091d0bd6580"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.473671 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/255d133f-2de5-4b7d-a1dc-9091d0bd6580-kube-api-access-dxtvg" (OuterVolumeSpecName: "kube-api-access-dxtvg") pod "255d133f-2de5-4b7d-a1dc-9091d0bd6580" (UID: "255d133f-2de5-4b7d-a1dc-9091d0bd6580"). InnerVolumeSpecName "kube-api-access-dxtvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.513421 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "255d133f-2de5-4b7d-a1dc-9091d0bd6580" (UID: "255d133f-2de5-4b7d-a1dc-9091d0bd6580"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.556723 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.556756 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxtvg\" (UniqueName: \"kubernetes.io/projected/255d133f-2de5-4b7d-a1dc-9091d0bd6580-kube-api-access-dxtvg\") on node \"crc\" DevicePath \"\"" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.556768 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/255d133f-2de5-4b7d-a1dc-9091d0bd6580-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.696086 4767 generic.go:334] "Generic (PLEG): container finished" podID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" containerID="9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2" exitCode=0 Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.696162 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk69f" event={"ID":"255d133f-2de5-4b7d-a1dc-9091d0bd6580","Type":"ContainerDied","Data":"9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2"} Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.696202 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk69f" event={"ID":"255d133f-2de5-4b7d-a1dc-9091d0bd6580","Type":"ContainerDied","Data":"d290f116276b57c6e92c9b56d3158b8a180a8fde038df5f74cf89d6f441471f8"} Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.696220 4767 scope.go:117] "RemoveContainer" containerID="9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.696335 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dk69f" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.721225 4767 scope.go:117] "RemoveContainer" containerID="30eaa2bbce6b7ad9ddbd59c668d705d0e5fcaf625c67c58ae3452c2cff561fe4" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.737198 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk69f"] Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.741999 4767 scope.go:117] "RemoveContainer" containerID="b38dcafb1c6a4ecedc5186322d182b633f7328def3b604aa28e2765ff0347b3a" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.756644 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dk69f"] Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.795480 4767 scope.go:117] "RemoveContainer" containerID="9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2" Nov 24 23:40:46 crc kubenswrapper[4767]: E1124 23:40:46.795943 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2\": container with ID starting with 9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2 not found: ID does not exist" containerID="9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.795986 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2"} err="failed to get container status \"9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2\": rpc error: code = NotFound desc = could not find container \"9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2\": container with ID starting with 9da0d8d0df5e6e54924cc110e06d0f4af1d74e5c1b5d211f1c725f8f7093caa2 not found: ID does not exist" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.796016 4767 scope.go:117] "RemoveContainer" containerID="30eaa2bbce6b7ad9ddbd59c668d705d0e5fcaf625c67c58ae3452c2cff561fe4" Nov 24 23:40:46 crc kubenswrapper[4767]: E1124 23:40:46.796330 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30eaa2bbce6b7ad9ddbd59c668d705d0e5fcaf625c67c58ae3452c2cff561fe4\": container with ID starting with 30eaa2bbce6b7ad9ddbd59c668d705d0e5fcaf625c67c58ae3452c2cff561fe4 not found: ID does not exist" containerID="30eaa2bbce6b7ad9ddbd59c668d705d0e5fcaf625c67c58ae3452c2cff561fe4" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.796361 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30eaa2bbce6b7ad9ddbd59c668d705d0e5fcaf625c67c58ae3452c2cff561fe4"} err="failed to get container status \"30eaa2bbce6b7ad9ddbd59c668d705d0e5fcaf625c67c58ae3452c2cff561fe4\": rpc error: code = NotFound desc = could not find container \"30eaa2bbce6b7ad9ddbd59c668d705d0e5fcaf625c67c58ae3452c2cff561fe4\": container with ID starting with 30eaa2bbce6b7ad9ddbd59c668d705d0e5fcaf625c67c58ae3452c2cff561fe4 not found: ID does not exist" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.796383 4767 scope.go:117] "RemoveContainer" containerID="b38dcafb1c6a4ecedc5186322d182b633f7328def3b604aa28e2765ff0347b3a" Nov 24 23:40:46 crc kubenswrapper[4767]: E1124 23:40:46.796680 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b38dcafb1c6a4ecedc5186322d182b633f7328def3b604aa28e2765ff0347b3a\": container with ID starting with b38dcafb1c6a4ecedc5186322d182b633f7328def3b604aa28e2765ff0347b3a not found: ID does not exist" containerID="b38dcafb1c6a4ecedc5186322d182b633f7328def3b604aa28e2765ff0347b3a" Nov 24 23:40:46 crc kubenswrapper[4767]: I1124 23:40:46.796703 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38dcafb1c6a4ecedc5186322d182b633f7328def3b604aa28e2765ff0347b3a"} err="failed to get container status \"b38dcafb1c6a4ecedc5186322d182b633f7328def3b604aa28e2765ff0347b3a\": rpc error: code = NotFound desc = could not find container \"b38dcafb1c6a4ecedc5186322d182b633f7328def3b604aa28e2765ff0347b3a\": container with ID starting with b38dcafb1c6a4ecedc5186322d182b633f7328def3b604aa28e2765ff0347b3a not found: ID does not exist" Nov 24 23:40:47 crc kubenswrapper[4767]: I1124 23:40:47.031714 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:47 crc kubenswrapper[4767]: I1124 23:40:47.031940 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:47 crc kubenswrapper[4767]: I1124 23:40:47.091394 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:47 crc kubenswrapper[4767]: I1124 23:40:47.800252 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:48 crc kubenswrapper[4767]: I1124 23:40:48.327084 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" path="/var/lib/kubelet/pods/255d133f-2de5-4b7d-a1dc-9091d0bd6580/volumes" Nov 24 23:40:49 crc kubenswrapper[4767]: I1124 23:40:49.314235 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:40:49 crc kubenswrapper[4767]: E1124 23:40:49.314800 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:40:49 crc kubenswrapper[4767]: I1124 23:40:49.378168 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwlxr"] Nov 24 23:40:50 crc kubenswrapper[4767]: I1124 23:40:50.748007 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rwlxr" podUID="e347e2b0-6bbc-432b-b145-31bce6a91b08" containerName="registry-server" containerID="cri-o://eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398" gracePeriod=2 Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.328111 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.376140 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfqv5\" (UniqueName: \"kubernetes.io/projected/e347e2b0-6bbc-432b-b145-31bce6a91b08-kube-api-access-xfqv5\") pod \"e347e2b0-6bbc-432b-b145-31bce6a91b08\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.376385 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-utilities\") pod \"e347e2b0-6bbc-432b-b145-31bce6a91b08\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.376482 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-catalog-content\") pod \"e347e2b0-6bbc-432b-b145-31bce6a91b08\" (UID: \"e347e2b0-6bbc-432b-b145-31bce6a91b08\") " Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.377762 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-utilities" (OuterVolumeSpecName: "utilities") pod "e347e2b0-6bbc-432b-b145-31bce6a91b08" (UID: "e347e2b0-6bbc-432b-b145-31bce6a91b08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.389384 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e347e2b0-6bbc-432b-b145-31bce6a91b08-kube-api-access-xfqv5" (OuterVolumeSpecName: "kube-api-access-xfqv5") pod "e347e2b0-6bbc-432b-b145-31bce6a91b08" (UID: "e347e2b0-6bbc-432b-b145-31bce6a91b08"). InnerVolumeSpecName "kube-api-access-xfqv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.400828 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e347e2b0-6bbc-432b-b145-31bce6a91b08" (UID: "e347e2b0-6bbc-432b-b145-31bce6a91b08"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.479616 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfqv5\" (UniqueName: \"kubernetes.io/projected/e347e2b0-6bbc-432b-b145-31bce6a91b08-kube-api-access-xfqv5\") on node \"crc\" DevicePath \"\"" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.479650 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.479662 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e347e2b0-6bbc-432b-b145-31bce6a91b08-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.768523 4767 generic.go:334] "Generic (PLEG): container finished" podID="e347e2b0-6bbc-432b-b145-31bce6a91b08" containerID="eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398" exitCode=0 Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.768973 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwlxr" event={"ID":"e347e2b0-6bbc-432b-b145-31bce6a91b08","Type":"ContainerDied","Data":"eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398"} Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.769020 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwlxr" event={"ID":"e347e2b0-6bbc-432b-b145-31bce6a91b08","Type":"ContainerDied","Data":"d595f0d964e8c8ca640c25d804b8beab1a360cbf7c6b098c4e418496cbb5dcca"} Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.769054 4767 scope.go:117] "RemoveContainer" containerID="eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.769172 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwlxr" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.810765 4767 scope.go:117] "RemoveContainer" containerID="f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.846406 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwlxr"] Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.851689 4767 scope.go:117] "RemoveContainer" containerID="bc154f4558e22cc20456187d3386d4b581e601c2c51028cf5ead508c6677b8e7" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.863032 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwlxr"] Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.885083 4767 scope.go:117] "RemoveContainer" containerID="eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398" Nov 24 23:40:51 crc kubenswrapper[4767]: E1124 23:40:51.885937 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398\": container with ID starting with eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398 not found: ID does not exist" containerID="eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.885984 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398"} err="failed to get container status \"eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398\": rpc error: code = NotFound desc = could not find container \"eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398\": container with ID starting with eaed03e36c2454610856030a24535a98a7066bf87ca91810c69afac9e328d398 not found: ID does not exist" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.886048 4767 scope.go:117] "RemoveContainer" containerID="f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d" Nov 24 23:40:51 crc kubenswrapper[4767]: E1124 23:40:51.886389 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d\": container with ID starting with f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d not found: ID does not exist" containerID="f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.886421 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d"} err="failed to get container status \"f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d\": rpc error: code = NotFound desc = could not find container \"f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d\": container with ID starting with f89294dc87d5d140472acfb93c48b82483646bfe6d865f5057d183c2c569ce3d not found: ID does not exist" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.886440 4767 scope.go:117] "RemoveContainer" containerID="bc154f4558e22cc20456187d3386d4b581e601c2c51028cf5ead508c6677b8e7" Nov 24 23:40:51 crc kubenswrapper[4767]: E1124 23:40:51.886767 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bc154f4558e22cc20456187d3386d4b581e601c2c51028cf5ead508c6677b8e7\": container with ID starting with bc154f4558e22cc20456187d3386d4b581e601c2c51028cf5ead508c6677b8e7 not found: ID does not exist" containerID="bc154f4558e22cc20456187d3386d4b581e601c2c51028cf5ead508c6677b8e7" Nov 24 23:40:51 crc kubenswrapper[4767]: I1124 23:40:51.886791 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc154f4558e22cc20456187d3386d4b581e601c2c51028cf5ead508c6677b8e7"} err="failed to get container status \"bc154f4558e22cc20456187d3386d4b581e601c2c51028cf5ead508c6677b8e7\": rpc error: code = NotFound desc = could not find container \"bc154f4558e22cc20456187d3386d4b581e601c2c51028cf5ead508c6677b8e7\": container with ID starting with bc154f4558e22cc20456187d3386d4b581e601c2c51028cf5ead508c6677b8e7 not found: ID does not exist" Nov 24 23:40:52 crc kubenswrapper[4767]: I1124 23:40:52.334222 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e347e2b0-6bbc-432b-b145-31bce6a91b08" path="/var/lib/kubelet/pods/e347e2b0-6bbc-432b-b145-31bce6a91b08/volumes" Nov 24 23:41:00 crc kubenswrapper[4767]: I1124 23:41:00.314332 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:41:00 crc kubenswrapper[4767]: E1124 23:41:00.316525 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-74ffd_openshift-machine-config-operator(7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0)\"" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" podUID="7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0" Nov 24 23:41:12 crc kubenswrapper[4767]: I1124 23:41:12.313193 4767 scope.go:117] "RemoveContainer" containerID="5d0e3c9771408ee3e35fd42d55637e00101a6a6ef138feea0760df502c751f2f" Nov 24 23:41:13 crc kubenswrapper[4767]: I1124 23:41:13.033335 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-74ffd" event={"ID":"7b0604ca-1caf-4d3d-bdaa-9bcb17ac0cf0","Type":"ContainerStarted","Data":"59ed09ade3dfb8caba5f52d9c68673175506405ea3ade7f37c6857d6e47941ba"} Nov 24 23:42:49 crc kubenswrapper[4767]: I1124 23:42:49.908592 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pckm4"] Nov 24 23:42:49 crc kubenswrapper[4767]: E1124 23:42:49.909843 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e347e2b0-6bbc-432b-b145-31bce6a91b08" containerName="extract-utilities" Nov 24 23:42:49 crc kubenswrapper[4767]: I1124 23:42:49.909865 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e347e2b0-6bbc-432b-b145-31bce6a91b08" containerName="extract-utilities" Nov 24 23:42:49 crc kubenswrapper[4767]: E1124 23:42:49.909887 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e347e2b0-6bbc-432b-b145-31bce6a91b08" containerName="extract-content" Nov 24 23:42:49 crc kubenswrapper[4767]: I1124 23:42:49.909901 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e347e2b0-6bbc-432b-b145-31bce6a91b08" containerName="extract-content" Nov 24 23:42:49 crc kubenswrapper[4767]: E1124 23:42:49.909975 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" containerName="extract-utilities" 
Nov 24 23:42:49 crc kubenswrapper[4767]: I1124 23:42:49.909988 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" containerName="extract-utilities" Nov 24 23:42:49 crc kubenswrapper[4767]: E1124 23:42:49.910019 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" containerName="registry-server" Nov 24 23:42:49 crc kubenswrapper[4767]: I1124 23:42:49.910031 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" containerName="registry-server" Nov 24 23:42:49 crc kubenswrapper[4767]: E1124 23:42:49.910059 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e347e2b0-6bbc-432b-b145-31bce6a91b08" containerName="registry-server" Nov 24 23:42:49 crc kubenswrapper[4767]: I1124 23:42:49.910071 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e347e2b0-6bbc-432b-b145-31bce6a91b08" containerName="registry-server" Nov 24 23:42:49 crc kubenswrapper[4767]: E1124 23:42:49.910090 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" containerName="extract-content" Nov 24 23:42:49 crc kubenswrapper[4767]: I1124 23:42:49.910102 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" containerName="extract-content" Nov 24 23:42:49 crc kubenswrapper[4767]: I1124 23:42:49.910683 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="e347e2b0-6bbc-432b-b145-31bce6a91b08" containerName="registry-server" Nov 24 23:42:49 crc kubenswrapper[4767]: I1124 23:42:49.910735 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="255d133f-2de5-4b7d-a1dc-9091d0bd6580" containerName="registry-server" Nov 24 23:42:49 crc kubenswrapper[4767]: I1124 23:42:49.913405 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:42:49 crc kubenswrapper[4767]: I1124 23:42:49.934731 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pckm4"] Nov 24 23:42:50 crc kubenswrapper[4767]: I1124 23:42:50.011575 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-utilities\") pod \"community-operators-pckm4\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:42:50 crc kubenswrapper[4767]: I1124 23:42:50.012127 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-catalog-content\") pod \"community-operators-pckm4\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:42:50 crc kubenswrapper[4767]: I1124 23:42:50.012312 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzvwl\" (UniqueName: \"kubernetes.io/projected/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-kube-api-access-xzvwl\") pod \"community-operators-pckm4\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:42:50 crc kubenswrapper[4767]: I1124 23:42:50.114357 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-catalog-content\") pod \"community-operators-pckm4\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:42:50 crc kubenswrapper[4767]: I1124 23:42:50.114403 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzvwl\" (UniqueName: \"kubernetes.io/projected/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-kube-api-access-xzvwl\") pod \"community-operators-pckm4\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:42:50 crc kubenswrapper[4767]: I1124 23:42:50.114463 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-utilities\") pod \"community-operators-pckm4\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:42:50 crc kubenswrapper[4767]: I1124 23:42:50.115009 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-catalog-content\") pod \"community-operators-pckm4\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:42:50 crc kubenswrapper[4767]: I1124 23:42:50.115019 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-utilities\") pod \"community-operators-pckm4\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:42:50 crc kubenswrapper[4767]: I1124 23:42:50.154509 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xzvwl\" (UniqueName: \"kubernetes.io/projected/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-kube-api-access-xzvwl\") pod \"community-operators-pckm4\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:42:50 crc kubenswrapper[4767]: I1124 23:42:50.251752 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:42:50 crc kubenswrapper[4767]: I1124 23:42:50.799322 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pckm4"] Nov 24 23:42:51 crc kubenswrapper[4767]: I1124 23:42:51.253138 4767 generic.go:334] "Generic (PLEG): container finished" podID="3b34e35d-607a-4c03-9122-c0f50c6ddcd3" containerID="039b814df7fa21daf44a6799e99f3ada97a3234e5f5408b4c05ae8851fd5f18c" exitCode=0 Nov 24 23:42:51 crc kubenswrapper[4767]: I1124 23:42:51.253334 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pckm4" event={"ID":"3b34e35d-607a-4c03-9122-c0f50c6ddcd3","Type":"ContainerDied","Data":"039b814df7fa21daf44a6799e99f3ada97a3234e5f5408b4c05ae8851fd5f18c"} Nov 24 23:42:51 crc kubenswrapper[4767]: I1124 23:42:51.253636 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pckm4" event={"ID":"3b34e35d-607a-4c03-9122-c0f50c6ddcd3","Type":"ContainerStarted","Data":"9922fb879f352267d21323d2320d6824668b55c8a3bb39635c8a1dc1fcdc48e2"} Nov 24 23:42:51 crc kubenswrapper[4767]: I1124 23:42:51.256362 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 23:42:52 crc kubenswrapper[4767]: I1124 23:42:52.266664 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pckm4" event={"ID":"3b34e35d-607a-4c03-9122-c0f50c6ddcd3","Type":"ContainerStarted","Data":"7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247"} Nov 24 23:42:53 crc kubenswrapper[4767]: I1124 23:42:53.290625 4767 generic.go:334] "Generic (PLEG): container finished" podID="3b34e35d-607a-4c03-9122-c0f50c6ddcd3" containerID="7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247" exitCode=0 Nov 24 23:42:53 crc kubenswrapper[4767]: I1124 23:42:53.290684 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pckm4" event={"ID":"3b34e35d-607a-4c03-9122-c0f50c6ddcd3","Type":"ContainerDied","Data":"7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247"} Nov 24 23:42:54 crc kubenswrapper[4767]: I1124 23:42:54.305773 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pckm4" event={"ID":"3b34e35d-607a-4c03-9122-c0f50c6ddcd3","Type":"ContainerStarted","Data":"1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00"} Nov 24 23:42:54 crc kubenswrapper[4767]: I1124 23:42:54.353508 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pckm4" podStartSLOduration=2.873900836 podStartE2EDuration="5.353471373s" podCreationTimestamp="2025-11-24 23:42:49 +0000 UTC" firstStartedPulling="2025-11-24 23:42:51.255992015 +0000 UTC m=+7454.172975397" lastFinishedPulling="2025-11-24 23:42:53.735562532 +0000 UTC m=+7456.652545934" observedRunningTime="2025-11-24 23:42:54.341152515 +0000 UTC m=+7457.258135927" watchObservedRunningTime="2025-11-24 
23:42:54.353471373 +0000 UTC m=+7457.270454785" Nov 24 23:43:00 crc kubenswrapper[4767]: I1124 23:43:00.251995 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:43:00 crc kubenswrapper[4767]: I1124 23:43:00.254703 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:43:00 crc kubenswrapper[4767]: I1124 23:43:00.326651 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:43:00 crc kubenswrapper[4767]: I1124 23:43:00.454550 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:43:00 crc kubenswrapper[4767]: I1124 23:43:00.580930 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pckm4"] Nov 24 23:43:02 crc kubenswrapper[4767]: I1124 23:43:02.394871 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pckm4" podUID="3b34e35d-607a-4c03-9122-c0f50c6ddcd3" containerName="registry-server" containerID="cri-o://1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00" gracePeriod=2 Nov 24 23:43:02 crc kubenswrapper[4767]: I1124 23:43:02.980250 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.019058 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzvwl\" (UniqueName: \"kubernetes.io/projected/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-kube-api-access-xzvwl\") pod \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.019150 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-utilities\") pod \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.019234 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-catalog-content\") pod \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\" (UID: \"3b34e35d-607a-4c03-9122-c0f50c6ddcd3\") " Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.021453 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-utilities" (OuterVolumeSpecName: "utilities") pod "3b34e35d-607a-4c03-9122-c0f50c6ddcd3" (UID: "3b34e35d-607a-4c03-9122-c0f50c6ddcd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.024812 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-kube-api-access-xzvwl" (OuterVolumeSpecName: "kube-api-access-xzvwl") pod "3b34e35d-607a-4c03-9122-c0f50c6ddcd3" (UID: "3b34e35d-607a-4c03-9122-c0f50c6ddcd3"). InnerVolumeSpecName "kube-api-access-xzvwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.064462 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b34e35d-607a-4c03-9122-c0f50c6ddcd3" (UID: "3b34e35d-607a-4c03-9122-c0f50c6ddcd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.121496 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzvwl\" (UniqueName: \"kubernetes.io/projected/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-kube-api-access-xzvwl\") on node \"crc\" DevicePath \"\"" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.121677 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.121737 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b34e35d-607a-4c03-9122-c0f50c6ddcd3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.409831 4767 generic.go:334] "Generic (PLEG): container finished" podID="3b34e35d-607a-4c03-9122-c0f50c6ddcd3" containerID="1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00" exitCode=0 Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.409873 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pckm4" event={"ID":"3b34e35d-607a-4c03-9122-c0f50c6ddcd3","Type":"ContainerDied","Data":"1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00"} Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.409928 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pckm4" event={"ID":"3b34e35d-607a-4c03-9122-c0f50c6ddcd3","Type":"ContainerDied","Data":"9922fb879f352267d21323d2320d6824668b55c8a3bb39635c8a1dc1fcdc48e2"} Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.409957 4767 scope.go:117] "RemoveContainer" containerID="1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.409957 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pckm4" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.445440 4767 scope.go:117] "RemoveContainer" containerID="7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.471759 4767 scope.go:117] "RemoveContainer" containerID="039b814df7fa21daf44a6799e99f3ada97a3234e5f5408b4c05ae8851fd5f18c" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.472005 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pckm4"] Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.484021 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pckm4"] Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.536501 4767 scope.go:117] "RemoveContainer" containerID="1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00" Nov 24 23:43:03 crc kubenswrapper[4767]: E1124 23:43:03.536962 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00\": container with ID starting with 1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00 not found: ID does not exist" containerID="1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.537013 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00"} err="failed to get container status \"1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00\": rpc error: code = NotFound desc = could not find container \"1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00\": container with ID starting with 1bd6ccfe443059f2c4260f51652f8a687fa1d90192cf93fa14fea212ce271b00 not found: ID does not exist" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.537042 4767 scope.go:117] "RemoveContainer" containerID="7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247" Nov 24 23:43:03 crc kubenswrapper[4767]: E1124 23:43:03.537497 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247\": container with ID starting with 7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247 not found: ID does not exist" containerID="7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.537537 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247"} err="failed to get container status \"7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247\": rpc error: code = NotFound desc = could not find container \"7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247\": container with ID starting with 7fa7c3b67e00e96936500f32b92b50c6d330cff4bca0b32c7fcb3f87d79c5247 not found: ID does not exist" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.537565 4767 scope.go:117] "RemoveContainer" containerID="039b814df7fa21daf44a6799e99f3ada97a3234e5f5408b4c05ae8851fd5f18c" Nov 24 23:43:03 crc kubenswrapper[4767]: E1124 23:43:03.537890 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"039b814df7fa21daf44a6799e99f3ada97a3234e5f5408b4c05ae8851fd5f18c\": container with ID starting with 039b814df7fa21daf44a6799e99f3ada97a3234e5f5408b4c05ae8851fd5f18c not found: ID does not exist" containerID="039b814df7fa21daf44a6799e99f3ada97a3234e5f5408b4c05ae8851fd5f18c" Nov 24 23:43:03 crc kubenswrapper[4767]: I1124 23:43:03.537916 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"039b814df7fa21daf44a6799e99f3ada97a3234e5f5408b4c05ae8851fd5f18c"} err="failed to get container status \"039b814df7fa21daf44a6799e99f3ada97a3234e5f5408b4c05ae8851fd5f18c\": rpc error: code = NotFound desc = could not find container \"039b814df7fa21daf44a6799e99f3ada97a3234e5f5408b4c05ae8851fd5f18c\": container with ID starting with 039b814df7fa21daf44a6799e99f3ada97a3234e5f5408b4c05ae8851fd5f18c not found: ID does not exist" Nov 24 23:43:04 crc kubenswrapper[4767]: I1124 23:43:04.335466 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b34e35d-607a-4c03-9122-c0f50c6ddcd3" path="/var/lib/kubelet/pods/3b34e35d-607a-4c03-9122-c0f50c6ddcd3/volumes"